<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://vrl.cs.brown.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nathan+Malkin</id>
	<title>VrlWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://vrl.cs.brown.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nathan+Malkin"/>
	<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php/Special:Contributions/Nathan_Malkin"/>
	<updated>2026-04-19T22:52:32Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Plans_and_Goals&amp;diff=5876</id>
		<title>Plans and Goals</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Plans_and_Goals&amp;diff=5876"/>
		<updated>2012-01-24T01:43:05Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Nathan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page the members of the [[VRL]] record and refine their goals for the current semester.  This is a living document in which [[dhl]] will provide feedback.  See the bottom of the page for links to past plans &amp;amp; goals documents.&lt;br /&gt;
&lt;br /&gt;
== Current Schedule ==&lt;br /&gt;
Meetings are on Tuesdays.  The authoritative list is in dhl&#039;s calendar. (updated here 9/24/10)&lt;br /&gt;
&lt;br /&gt;
== Current Plans and Goals (Spring &#039;12) ==&lt;br /&gt;
&lt;br /&gt;
=== [[User:Brad Berg|Brad]] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Cagatay Demiralp|Çağatay]] === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Caroline Ziemkiewicz|Caroline]] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== David ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Eni ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Jadrian Miles|Jadrian]] ===&lt;br /&gt;
&lt;br /&gt;
* [[:Image:Jadrian Miles PhD Proposal original 2009-06-22.pdf‎|Initial thesis proposal, submitted 2009-06-22]]&lt;br /&gt;
* [[:Image:Jadrian Miles PhD Proposal rev2 2009-10-15.pdf|Proposal second revision, prepared 2009-10-15]]&lt;br /&gt;
&lt;br /&gt;
{{User:Jadrian Miles/OKRs}}&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Nathan Malkin|Nathan]] ===&lt;br /&gt;
&lt;br /&gt;
* UTRA application?&lt;br /&gt;
* Independent research&lt;br /&gt;
*# API for experimental platform to interact with Mechanical Turk&lt;br /&gt;
*# Using games from experimental economics (Prisoner&#039;s Dilemma, Trust Game, Ultimatum Game), attempt to replicate some of their findings&lt;br /&gt;
*# Test the effect of the incentive level on people&#039;s behavior&lt;br /&gt;
*# Test the effects various manifestations of online identity have on behavior&lt;br /&gt;
*#* Attempt to replicate findings with faces increasing other-regarding behavior&lt;br /&gt;
*#* Repeat experiment with avatars&lt;br /&gt;
*#* Also names and nicknames&lt;br /&gt;
*# Test effects of different interface elements&lt;br /&gt;
*#* Colors, borders, overall &amp;quot;prettiness&amp;quot;, ...&lt;br /&gt;
*# Test effects of priming&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Radu === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Wenjin ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Steven Gomez|Steve]] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Ryan ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Hua Guo|Hua]] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== Past Plans and Goals ==&lt;br /&gt;
* [[/Summer-Fall 2011|Summer-Fall &#039;11]]&lt;br /&gt;
* [[/Spring 2011|Spring &#039;11]]&lt;br /&gt;
* [[/Summer-Fall 2010|Summer-Fall &#039;10]]&lt;br /&gt;
* [[/Spring 2010|Spring &#039;10]]&lt;br /&gt;
* [[/Fall 2009|Fall &#039;09]]&lt;br /&gt;
* [[/Summer 2009|Summer &#039;09]]&lt;br /&gt;
* [[/Spring 2009|Spring &#039;09]]&lt;br /&gt;
* [[/Fall 2008|Fall &#039;08]]&lt;br /&gt;
* [http://sites.google.com/a/vis.cs.brown.edu/collaboravis/Home/summer-08-group-goals Summer &#039;08]&lt;br /&gt;
&lt;br /&gt;
[[Category:VRL]]&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Plans_and_Goals&amp;diff=5875</id>
		<title>Plans and Goals</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Plans_and_Goals&amp;diff=5875"/>
		<updated>2012-01-24T01:42:39Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Nathan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page the members of the [[VRL]] record and refine their goals for the current semester.  This is a living document in which [[dhl]] will provide feedback.  See the bottom of the page for links to past plans &amp;amp; goals documents.&lt;br /&gt;
&lt;br /&gt;
== Current Schedule ==&lt;br /&gt;
Meetings are on Tuesdays.  The authoritative list is in dhl&#039;s calendar. (updated here 9/24/10)&lt;br /&gt;
&lt;br /&gt;
== Current Plans and Goals (Spring &#039;12) ==&lt;br /&gt;
&lt;br /&gt;
=== [[User:Brad Berg|Brad]] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Cagatay Demiralp|Çağatay]] === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Caroline Ziemkiewicz|Caroline]] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== David ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Eni ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Jadrian Miles|Jadrian]] ===&lt;br /&gt;
&lt;br /&gt;
* [[:Image:Jadrian Miles PhD Proposal original 2009-06-22.pdf‎|Initial thesis proposal, submitted 2009-06-22]]&lt;br /&gt;
* [[:Image:Jadrian Miles PhD Proposal rev2 2009-10-15.pdf|Proposal second revision, prepared 2009-10-15]]&lt;br /&gt;
&lt;br /&gt;
{{User:Jadrian Miles/OKRs}}&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Nathan Malkin|Nathan]] ===&lt;br /&gt;
&lt;br /&gt;
* Independent research&lt;br /&gt;
*# API for experimental platform to interact with Mechanical Turk&lt;br /&gt;
*# Using games from experimental economics (Prisoner&#039;s Dilemma, Trust Game, Ultimatum Game), attempt to replicate some of their findings&lt;br /&gt;
*# Test the effect of the incentive level on people&#039;s behavior&lt;br /&gt;
*# Test the effects various manifestations of online identity have on behavior&lt;br /&gt;
*#* Attempt to replicate findings with faces increasing other-regarding behavior&lt;br /&gt;
*#* Repeat experiment with avatars&lt;br /&gt;
*#* Also names and nicknames&lt;br /&gt;
*# Test effects of different interface elements&lt;br /&gt;
*#* Colors, borders, overall &amp;quot;prettiness&amp;quot;, ...&lt;br /&gt;
*# Test effects of priming&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Radu === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Wenjin ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Steven Gomez|Steve]] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Ryan ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== [[User:Hua Guo|Hua]] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- please don&#039;t edit below this line --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== Past Plans and Goals ==&lt;br /&gt;
* [[/Summer-Fall 2011|Summer-Fall &#039;11]]&lt;br /&gt;
* [[/Spring 2011|Spring &#039;11]]&lt;br /&gt;
* [[/Summer-Fall 2010|Summer-Fall &#039;10]]&lt;br /&gt;
* [[/Spring 2010|Spring &#039;10]]&lt;br /&gt;
* [[/Fall 2009|Fall &#039;09]]&lt;br /&gt;
* [[/Summer 2009|Summer &#039;09]]&lt;br /&gt;
* [[/Spring 2009|Spring &#039;09]]&lt;br /&gt;
* [[/Fall 2008|Fall &#039;08]]&lt;br /&gt;
* [http://sites.google.com/a/vis.cs.brown.edu/collaboravis/Home/summer-08-group-goals Summer &#039;08]&lt;br /&gt;
&lt;br /&gt;
[[Category:VRL]]&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Project_Schedule/Steve&amp;diff=5610</id>
		<title>CS295J/Project Schedule/Steve</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Project_Schedule/Steve&amp;diff=5610"/>
		<updated>2011-10-24T17:19:47Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: CS295J/Project Schedule/Steve moved to CS295J/Project Schedule:Steve over redirect: I lied.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[CS295J/Project Schedule:Steve]]&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Project_Schedule:Steve&amp;diff=5609</id>
		<title>CS295J/Project Schedule:Steve</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Project_Schedule:Steve&amp;diff=5609"/>
		<updated>2011-10-24T17:19:47Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: CS295J/Project Schedule/Steve moved to CS295J/Project Schedule:Steve over redirect: I lied.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Steve&#039;s Project Schedule ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nov 1&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nov 8&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nov 15&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nov 22&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nov 29&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dec 6&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dec 13&#039;&#039;&#039; -- Final draft, two-page extended abstract of research project&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Project_Schedule:Steve&amp;diff=5607</id>
		<title>CS295J/Project Schedule:Steve</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Project_Schedule:Steve&amp;diff=5607"/>
		<updated>2011-10-24T17:19:13Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: CS295J/Project Schedule:Steve moved to CS295J/Project Schedule/Steve: better structure, easier for navigation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Steve&#039;s Project Schedule ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nov 1&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nov 8&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nov 15&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nov 22&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nov 29&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dec 6&#039;&#039;&#039; --&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dec 13&#039;&#039;&#039; -- Final draft, two-page extended abstract of research project&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Review_Response&amp;diff=5504</id>
		<title>CS295J/Review Response</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Review_Response&amp;diff=5504"/>
		<updated>2011-10-04T16:34:23Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Review #6 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Full Proposal (SI2-SSI) Review and Response ==&lt;br /&gt;
&lt;br /&gt;
Dear reviewers,&lt;br /&gt;
&lt;br /&gt;
Thank you so much for your insightful, constructive, and helpful comments.  We particularly appreciated the positive evaluation of the proposal and of the potential of our methods for enhancing visual data analysis with regard to brain functioning.  &lt;br /&gt;
&lt;br /&gt;
We are overwhelmingly grateful to have received a &#039;competitive&#039; summary evaluation.&lt;br /&gt;
&lt;br /&gt;
We have noted and addressed the criticisms below in the context of the reviews received.  Our responses and details concerning our revisions are in bold.&lt;br /&gt;
&lt;br /&gt;
Regards,&lt;br /&gt;
&lt;br /&gt;
David Laidlaw et al.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;1.&#039;&#039;&#039; &#039;&#039;The panel&#039;s discussion focused on the sustainability issue beyond the end of the project support timeframe. While a section of the proposal talks about the Outreach, Education, and Sustainability Plan, the community outreach and sustainability aspects are treated somewhat cursorily. These two issues are closely related: without community support, the software is unlikely to be sustainable in the long run. On the other hand, the proposers appear to be well known in their field, which may enhance community uptake. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;As added in section G, the plan for creating long-term sustainability now includes more detail.  Specifically, we will focus on creating a lasting open-source community that provides frequent updates to the code, centralized by the core research team following a Macro R&amp;amp;D development infrastructure.&#039;&#039;&#039; ([[User:Stephen Brawner|Stephen Brawner]])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;2.&#039;&#039;&#039; &#039;&#039;The proposal was perhaps over-ambitious; aspects of the evaluation (for example, eye tracking) will create large amounts of data that will require correspondingly intensive data analysis. However, the panel felt that even if the project did not accomplish every detail of the proposal, it would still be highly worthwhile. Similarly, while some aspects of the project might be seen as risky, a certain amount of risk is acceptable in NSF proposals, or even expected. Moreover, any risk is mitigated by the qualifications of the PIs, as exemplified by their excellent track record. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;The proposers have laid out a detailed five year plan. One panelist questioned whether sufficient attention had been paid to issues of sustainability, in particular there was no mention made of plans for software support beyond the end of the five year plan. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1&#039;&#039;&#039; ([[User: Stephen Brawner|Stephen]])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;4.&#039;&#039;&#039; &#039;&#039;The proposers see this as sort of a prototype of the way software could be developed to support other scientific endeavors; this effect appears to constitute most of the Broader Impact (I wouldn&#039;t count benefit to &amp;quot;the entire brain science research community&amp;quot;, mentioned in the BI statement, as broader impact). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;5.&#039;&#039;&#039; &#039;&#039;One quibble I have in the Management and Coordination Plan is the statement that &amp;quot;The Stanford researchers will visit Brown if face-to-face interactions become necessary.&amp;quot; While electronic communication allows collaboration in ways that would not have been possible before, I think it&#039;s not a question of whether face-to-face will be necessary, but how often. Fortunately, this appears to have been built into the travel budget.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;As the reviewer notes, face-to-face communication will be necessary; it is not a question of &#039;if&#039; but of &#039;when&#039;.  Researchers at Brown will plan to coordinate visits to California, and vice versa.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;6.&#039;&#039;&#039; &#039;&#039;As the proposal is for SI2-SSI, I would like to know more on what the team plans to do to ensure the sustainability of the software and develop open-source community support. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;7.&#039;&#039;&#039; &#039;&#039;It is also unclear what the team will do to integrate diversity into the proposed activity. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-proposal (Expeditions) Reviews and Response ==&lt;br /&gt;
:&#039;&#039;Proposal Number: 1064261&lt;br /&gt;
&lt;br /&gt;
===Panel Summary for Expeditions Preliminary Proposals===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Preliminary Proposal Summary (Vision/Goals of the Expedition) &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;1. &#039;&#039;&#039;However the proposal does not elaborate on how the PIs would use heuristic knowledge of artists and visual designers; this is only mentioned in the introduction but never developed in the proposal. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We have added a section called &amp;quot;Evaluating Predictive Models&amp;quot; that outlines performance metrics the system will predict, e.g., task completion time, number of insights (see Saraiya et al., 2005), and methods for predicting user state and goals from observed user interactions and eye-tracking.  We plan to collect and compare heuristic design knowledge with predictive models of visual-search behavior using eye-tracking; one expected contribution will be validating or revising existing design guidelines.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Overall the panel had difficulty coming to a common understanding of the proposal contents. The range of reviews mirrored the range of what people had read into the proposal and their enthusiasm for the proposal topic. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;That said, the proposal failed to convince the reviewers along a number of dimensions. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;2. &#039;&#039;&#039;First, the proposal fails to articulate a clear research plan and a clear set of outcomes. As an example, the proposal mentions a number of cognitive concepts that will be incorporated in the interface, such as causal reasoning and dual systems theory. Utilizing cognitive principles to inform this research is applauded, but we expected to see some indication of how they would be put into practice. The proposal is even less clear when it comes to goal maintenance.&lt;br /&gt;
&#039;&#039;&#039;We have added a section called &amp;quot;Task Analysis&amp;quot; that outlines how we will analyze and encode tasks and sub-tasks that prime or conflict with them.  In a subsection titled &amp;quot;Applications&amp;quot;, we outline the design of a software module that facilitates analyst &#039;goal maintenance&#039; by collecting user input, predicting goals from it, and displaying suggestions for similar or priming sub-goals.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ] &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; &amp;quot;We have included a plan for implementing cognitive principles. Strategies (through eye-tracking and other means outlined above) will include monitoring users&#039; preferences and cognitive strategies through reaction time data and verbal reports from users. Historically, such strategies have proved successful when gathering user information, input, and comprehension; we expect the same in our implementation. We also intend to use eye-tracking and reaction time procedures in order to best gauge cognitive load to the extent that it affects user experience, as we strive to understand user adaptation and navigation strategies within our applications.&amp;quot;&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara Kliman-Silver]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;3. &#039;&#039;&#039;A more ambitious undertaking is that of predicting user performance. This aspect of the work is motivated by previous studies showing that student performance on algebra problems can be predicted based on eye movements. This is a very interesting result, but it is not clear how it would apply to an entirely different domain that requires a different and more taxing set of cognitive skills. The proposal does not describe how user behavior will be measured (other than through eye trackers) or even how performance is going to be measured. Algebra problems are generally closed ended, with a well-defined solution, whereas exploratory data analysis of neuroinformatics data is an open-ended problem.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;4. &#039;&#039;&#039;The proposal needs to be much more explicit about the techniques used to derive predictions including the track records of these techniques and the ways in which these techniques may need to be enhanced to be used in particular application domains. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;5. &#039;&#039;&#039;Another concern is methodological. We suggest that the team identify benchmark tasks that would be representative of the cognitive skills that the interface attempts to capture, and would also vary in their degree of complexity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We will break down the brain circuitry exploration process into a set of simpler cognitive tasks through cognitive task analysis. Benchmark tests that involve solving simple problems related to brain circuitry analysis will then be designed based on the identified cognitive tasks. These benchmark tests can then be used to evaluate the users&#039; cognitive performance when working with our system in comparison with existing systems.&#039;&#039;&#039; [ [[User: Hua Guo|Hua]], Sep.27, 2011 ]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;As the brain exploration process may involve essentially complex tasks that cannot be broken down into simpler ones, we have chosen to evaluate the tool qualitatively over a long-term period: we will observe and interview users about their experience with the tools in order to discover critical features that existing tools are missing.  The researchers will also provide technical support as necessary. ([[User:Diem Tran|Diem Tran]] 21:22, 28 September 2011 (EDT))&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;6. &#039;&#039;&#039;The panel thought that the proposal team could do more work in considering possibilities for broader societal impact and for designing more impactful outreach and education activities that would extend beyond the Brown community. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We have highlighted some of the broader implications for our project, such as making tools publicly available for other communities of brain scientists, medical researchers, and even instructors to learn about brain connectivity through our visualization initiatives. We have also outlined plans to develop tools for elementary, middle, and high school students as they explore the brain, its anatomy, and its capabilities.&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara Kliman-Silver]] ]&lt;br /&gt;
&lt;br /&gt;
===Review #1===&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;7. &#039;&#039;&#039;Other than the nominal support for traditional computing need by the computer science department at Brown, there did not appear to be any specific institutional support for this proposal. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;8. &#039;&#039;&#039;The value of the proposed work appears to be very important in this specific area of research. While this is not my specific area of expertise, I have some concern whether this project meets the over arching goals of the Expeditions in Computing Initiative. In particular, it would seem to have the potential to stimulate interest in this area for graduate students in related disciplines, but may not be very successful in drawing attention to STEM studies amongst the K-12 age group. &lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 6.&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara Kliman-Silver]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;9. &#039;&#039;&#039;I did not see anything in the proposal that indicated that it would be of particular interest to youth and underrepresented groups. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;10. &#039;&#039;&#039;The single sentence on stimulating effective knowledge transfer did not seem convincing. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Review #2===&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;11. &#039;&#039;&#039;The major weakness was that I was not really clear how the software tool would eventually work. For instance, they talked a little about following pupils of the user - but I was not clear how they would capitalize off of that knowledge. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;As mentioned in the response to the introduction of the pre-proposal, we have expanded our plan for how we will incorporate pupil data from eye-tracking into creating an interface that is optimized for data analysis. ([[User:Jenna Zeigen|Jenna Zeigen]])&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Review #3===&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;12. &#039;&#039;&#039;The primary risk in this proposal is that the cognitive models which can be created are too weak to support the design process. There is no guarantee that the computer scientists and psychologists doing this research will be able understand and model the cognitive processes of a brain scientist. &lt;br /&gt;
&#039;&#039;&#039;We have included our experience developing visualization tools for brain study at Brown, and we have added a detailed timeline; in each phase, brain scientists will be involved in the process of model evaluation, testing, and providing feedback.&#039;&#039;&#039; ( --- [[User:Chen Xu|Chen Xu]])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;13. &#039;&#039;&#039;Except through the rather limited vehicle of scholarly publication, it is not clear how the cognitive models themselves are to be made available. The authors may want to consider using their own visualization capabilities to explain the models. &lt;br /&gt;
&#039;&#039;&#039;On page x, section y, we have included preliminary visualizations, which are examples of the visualizations we will implement to explain the cognitive models we intend to incorporate in our research.&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara]] ]&lt;br /&gt;
&lt;br /&gt;
===Review #4===&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Weaknesses &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Despite its laudable objective, this work is not ready for further scrutiny as a full proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;14. &#039;&#039;&#039;First, the proposal fails to articulate a clear research plan and a clear set of outcomes. As an example, the proposal mentions a number of cognitive concepts that will be incorporated in the interface, such as causal reasoning and dual systems theory. How are these principles going to be used to design a better visualization, and how are they going to be tested? I very much like the idea of using cognitive principles, but would have expected to see some indication of how they would be put into practice. The proposal is even less clear when it comes to goal maintenance: &amp;quot;We will use these principles to determine which tasks to make easily accessible to users and which to put in the background.&amp;quot; This is a general problem for interface design, not a solution to the problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;Reviewer 4 indicated that the proposal was not explicit about how cognitive principles would be implemented in the project. In section ___ on page __, we have elucidated how the principles listed will concretely lead to better, more optimized visualizations, as well as the studies we plan to perform to show that the visualizations are indeed efficient. ([[User:Jenna Zeigen|Jenna Zeigen]])&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;15. &#039;&#039;&#039;A more ambitious undertaking is that of predicting user performance. This aspect of the work is motivated by previous studies showing that student performance on algebra problems can be predicted based on eye movements. This is a very interesting result, but it is not clear how it would apply to an entirely different domain that requires a different and more taxing set of cognitive skills. This is a wild extrapolation. The proposal does not describe how user behavior will be measured (other than through eye trackers) or even how is performance going to be measured. Algebra problems are generally closed ended, with a well-defined solution, whereas exploratory data analysis of neuroinformatics data is an open-ended problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We have made a detailed timeline, and in phase/year 3 we describe how we will measure user performance and evaluate the cognitive models.  We will evaluate the models&#039; predictions against actual user data; in addition to eye-tracking, we will adopt computer-interaction logging, video logging, and skin conductance response, and the results will help refine the models.&#039;&#039;&#039; ( --- [[User:Chen Xu|Chen Xu]])&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;16. &#039;&#039;&#039;Another concern is methodological. Say that the proposal had articulated a clear plan and a reasonable set of deliverables for a new generation of visualization interfaces. Wouldn&#039;t it be better to test this interface on some benchmark problems, and see how it facilitates performance relative to a standard interface? These benchmark problems would be representative of the cognitive skills that the interface attempts to capture, and would also vary in their degree of complexity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We test user-interface performance primarily through empirical methods, grounded in the principles of perception and goal selection. We have also included benchmarks such as the time an average user takes to complete a unit task, and we will compare the previous and current interfaces on a unit-task performance test. ( --- [[User:Wenjun Wang|Wenjun Wang]])&lt;br /&gt;
:&#039;&#039;To what extent does the proposed activity suggest and explore creative, original, or potentially transformative concepts? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal is very ambitious in its overall objectives. A visualization tool having the characteristics suggested in the proposal would be invaluable to brain science as well as to other scientific disciplines dealing with high-dimensional complex data, such as genomics/proteomics, geospatial analysis, network analysis, etc. However, the proposal fails to turn a high-level concept into a realizable implementation. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;Our framework allows additional modules to be incorporated as the system evolves. Both the interface that facilitates viewing data and the predictive model can be applied to other disciplines. ( --- [[User:Wenjun Wang|Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Will the work contribute to realization of the EIC program goals and is it likely to demonstrate completion of these goals? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Understanding the brain is one of our greatest scientific challenges. Unfortunately, without a clear research plan it is difficult to assess the likelihood that the proposal will be able to demonstrate completion of its overall goals. &lt;br /&gt;
&#039;&#039;&#039;We have added a new section named &amp;quot;Detailed Plan&amp;quot; which describes in detail our proposed plan throughout the project duration. We divide the plan into milestones, allocate manpower and budget, and provide a tentative schedule for each milestone. [ [[User:Diem Tran|Diem Tran]] ] &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Weaknesses &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;17. &#039;&#039;&#039;Societal benefits of this tool would derive from its scientific merit to the extent that it would help understand the brain. Given the characteristics of this project, I wonder if other NSF funding opportunities would be more suitable, such as the FODAVA program or the interdisciplinary program in neuroscience at CISE. I also wonder whether this work should be funded instead by NIH (NIBIB, NIMH). The budget contains a request for $3,000 to cover costs of animal (mouse) care; why is this needed given that the proposal is for software development? &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;While the scientific merit of this project is most directly applicable to understanding the brain, it should be noted that the goal to “convincingly demonstrate that the employed techniques facilitate better analysis” (described in section c.1) is unique to this proposal and would have impacts reaching into any other discipline in which vast amounts of data need to be interpreted. [ [[User:Michael Spector|Michael Spector]] ]&lt;br /&gt;
&#039;&#039;&#039;In addition, it is our hope that much of the development we do in creating an interface and visualizations that adapt to users&#039; needs will be easily carried over to a much wider range of programs, not only across disciplines but also across educational levels, allowing students to find ways to learn and explore that best fit their learning styles and needs. [ [[User:Jenna Zeigen|Jenna Zeigen]] ]&lt;br /&gt;
&lt;br /&gt;
===Review #5===&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;18. &#039;&#039;&#039;The proposal is vague and highly repetitive. It is not well written. It is not clear what research experiments are performed. &lt;br /&gt;
&#039;&#039;&#039;As described in the proposal, we apply principles from cognition and visualization to develop a tool for visualizing brain circuits and connectivity. We evaluate our tool using methodologies established in cognitive science, psychology, and human-computer interaction research. Detailed explanations of our approach are given in our responses to comments 1, 2, 5, 14, and 16. [ [[User:Diem Tran | Diem Tran]] ]&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;19. &#039;&#039;&#039;The details of the educational plan are not given. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;As noted on page 26 (section g, &amp;quot;Outreach, Education, and Sustainability Plan&amp;quot;), we plan to educate users on the use of our developed tool through the same channels as our outreach efforts; namely, to the visualization, interaction, and brain science communities. We also note that we will deploy our tool in academic settings, such as labs and classrooms in the brain sciences, which will benefit those groups from an educational point of view, as well as provide another avenue through which we can receive feedback on our software. [ [[User:Michael Spector|Michael Spector]] ]&lt;br /&gt;
&lt;br /&gt;
===Review #6===&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;20. &#039;&#039;&#039;However, the proposed activities within the proposal are completely underspecified. Given the scale of the project and the challenging nature of the problems to be faced, the level of technical detail and planning presented in the proposal is insufficient and unconvincing. A great many claims are made in the proposal with no clear measurable end-points to determine how success within the project could be evaluated. Specifically, the authors claim to employ developments in concepts of cognition and perception to assist scientists&#039; reasoning. What aspects of reasoning? For which tasks? Within which discipline? If this is only related to tractography, what sort of scientific hypotheses do the applicants expect to address? Are they relating these representations of neural connectivity to studies in animals? Are they relating these analyses to other modalities of imaging data? How do they intend to reason over the complex semantics of these other experimental types? These are all glaring omissions from the proposal. The description of the cognitive science aspects of the project was marginally better specified, but I still found the details lacking. For example, the claim was made that &#039;our system will tune itself to individual work styles.&#039; How? What technical elements will the system exploit to accomplish this? In particular, the applicants must spend more time specifying the precise tasks that the system is designed to tackle before it is possible to improve or optimize performance at that task. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Response: We have added a section called &amp;quot;Task Analysis,&amp;quot; in which we outline a plan for specifying visual analysis tasks through user observation. We will begin with loosely structured observations to generate hypotheses about primary tasks and user strategies in this domain, and continue with cognitive task analysis methods to focus and test these hypotheses. We present preliminary results that show the viability of this plan.  ([[User:Caroline Ziemkiewicz|Caroline Ziemkiewicz]] 10:48, 22 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Concrete examples and specifications for tasks have been added to section ____. Through pilot studies and interviews, we have identified the most important tasks performed by brain scientists. The proposal now includes a preliminary analysis of cognitive principles that can be used to assist in these tasks, together with hypotheses that we plan to address. &lt;br /&gt;
&amp;lt;br /&amp;gt;[ [[User:Nathan Malkin|Nathan]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;21. &#039;&#039;&#039;In particular, the statement &amp;quot;We expect the core element to evolve significantly through the five years of the project. It cannot be meaningfully defined without the data we will acquire from users, so details beyond this overview are not possible yet&amp;quot; is incredibly revealing and suggests that the applicants themselves do not have a clear idea of how they intend to solve these problems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We have included a diagram to clarify the software architecture described in the proposal.  We implemented a simple version of the architecture as a proof of concept, as described in the section &amp;quot;Preliminary Work&amp;quot;.  Individual modules will be developed and adjusted with an iterative testing schedule.&#039;&#039;&#039;  [It would also be good to add, &amp;quot;This is similar to another problem/tool that worked successfully in this cited paper&amp;quot;, but I haven&#039;t found a good example yet.] [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;22. &#039;&#039;&#039;The Figures presented in the proposal looked confusing and uninformative, adding nothing to the argument that these systems would actually help a scientist understand the underlying data. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We have incorporated a new set of figures into the proposal, which are more focused on the concept and design of the proposed system; in particular, we have included some figures to demonstrate the prototype design for different views of the system. [ [[User:Hua Guo|Hua]] ]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;23. &#039;&#039;&#039;The presence of high-end graphics equipment (such as a virtual-reality cave and haptic displays, etc.) is a plus for the project, but it may also hinder the developers&#039; ability to release their work to a broader audience. If the system is only available to the small number of people who have access to such facilities, then the impact of the work would be lessened.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We have expanded our dissemination plan to include multiple distributions for different end-user workstations.  Modules that require high-end hardware for online data capture (e.g., eye-tracking) or display will be disabled in the basic distributions for Windows and Mac.  Our research agenda includes experiments with this equipment -- for instance, to validate cognitive models or design guidelines -- but it is not necessary for most end users who download releases of the tool.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;24. &#039;&#039;&#039;The proposed activity makes no specific claims to target or support underrepresented groups explicitly. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;25. &#039;&#039;&#039;The underspecification of the technical aspects of the project undermines the open-source distribution of the code. It is technically demanding to generate usable open-source products for other people to use. Notably, browsing the co-PIs&#039; webpages, there were no easily accessible open-source software products available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;- Since the review, the PI&#039;s lab has published a large selection of source code from previous projects, available from its website (vis.cs.brown.edu), with provisions for multiple platforms. &amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;- We are undertaking a project to make anonymized images of healthy and abnormal brains (and related data), from our projects and those of a number of collaborators worldwide, freely available to the scientific community. &amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;- Development will be done using a publicly visible source code repository (e.g., Github), so that other scientists and the public may be able to track progress, comment on changes, provide feedback, and even contribute pieces of code to the software. &amp;lt;br /&amp;gt;&lt;br /&gt;
[ [[User:Nathan Malkin|Nathan]] ]&lt;br /&gt;
&lt;br /&gt;
===Removed Chunks of the Proposal (not included in the response letter)===&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Proposal Information&lt;br /&gt;
:&#039;&#039;Proposal Number: 1064261&lt;br /&gt;
:&#039;&#039;Proposal Title: Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;Received by NSF: 09/10/10&lt;br /&gt;
:&#039;&#039;Principal Investigator: David Laidlaw&lt;br /&gt;
:&#039;&#039;Co-PI(s): David Badre, Steven Sloman&lt;br /&gt;
:&#039;&#039;This Proposal has been Electronically Signed by the Authorized Organizational Representative (AOR).&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program Information&lt;br /&gt;
:&#039;&#039;NSF Division: Division of Computer and Communication Foundations&lt;br /&gt;
:&#039;&#039;NSF Program: Experimental Expeditions&lt;br /&gt;
:&#039;&#039;Program Officer: Mitra Basu&lt;br /&gt;
:&#039;&#039;PO Telephone: (703) 292-8649&lt;br /&gt;
:&#039;&#039;PO Email: mbasu@nsf.gov&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Proposal Number: 1064261&lt;br /&gt;
:&#039;&#039;NSF Program: Experimental Expeditions&lt;br /&gt;
:&#039;&#039;Principal Investigator: Laidlaw, David H&lt;br /&gt;
:&#039;&#039;Proposal Title: Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Proposal Summary&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal is focused on blending the areas of cognitive science, neuroscience, and HCI to develop new tools that would help in understanding the interrelationships in complex interconnected data sets. The work would focus on brain science activities but would likely be applicable to many other areas that have complex interconnected data such as crime and terrorism analysis. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The strength of this proposal is in the benefits it would bring to the intersections of cognitive science, neuroscience, and HCI. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Intellectual Merit &lt;br /&gt;
:&#039;&#039;The authors of this proposal are planning to build cognitive models of brain scientists&#039; perception and reasoning in performing their research. They then intend to use these models to develop new and improved interaction and visualization techniques for tracing neural pathways, with the expectation that use of the cognitive models will reduce the trial and error required to produce effective tools. Additionally, the cognitive models may even result in the invention of new visualizations through a more systematic exploration of the design space. One novel aspect of this proposal is the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Broader Impacts &lt;br /&gt;
:&#039;&#039;The proposed work would provide research opportunities for faculty, postdocs and students working in the project. A tool with the capabilities described in the proposal would mostly benefit the brain science research community. The tool will be used in two computer science courses at Brown.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Rationale for the Recommendation &lt;br /&gt;
:&#039;&#039;Despite enthusiasm for this topic and the potential for significant impact if successful, the panel could not support the pre-proposal at this time due to a lack of coherent vision and a realizable plan of implementation. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a very ambitious proposal that seeks to develop a new generation of visualization tools for the analysis of neuroinformatics data. While this is the kind of big-picture, high-risk project that the Expeditions in Computing program is designed to support, the proposal itself fails to provide a plan for achieving its high-level, abstract objectives. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Put an X next to the appropriate category &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Invite &lt;br /&gt;
:&#039;&#039;Invite-if-possible &lt;br /&gt;
:&#039;&#039;Do-Not-Invite	 X &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This summary was read by/to the panel, and the panel concurred that the summary accurately reflects the panel discussion.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Recommendation: Do Not Invite&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Review #1&lt;br /&gt;
:&#039;&#039;Rating: Very Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal is focused on blending the areas of cognitive science, neuroscience, and HCI to develop new tools that would help in understanding the interrelationships in complex interconnected data sets. One novel aspect of this proposal is the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception. The work would focus on brain science activities but would likely be applicable to many other areas that have complex interconnected data, such as crime and terrorism analysis. The strength of this proposal is in the benefits it would bring to the intersections of cognitive science, neuroscience, and HCI. &lt;br /&gt;
:&#039;&#039;The leadership team of distinguished faculty and researchers is well qualified to conduct this research. The entire team is an appropriate blend of researchers representing all of the subordinate areas of research. &lt;br /&gt;
:&#039;&#039;The quality of prior work from all participating members is uniformly outstanding and appropriate for this endeavor. &lt;br /&gt;
:&#039;&#039;While there has been much work of a similar nature done in the past, this project would move our knowledge forward on a number of new fronts. In addition, the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception is novel. &lt;br /&gt;
:&#039;&#039;The proposal is well conceived and organized and clearly presented. &lt;br /&gt;
:&#039;&#039;With the pupil tracking device requested in the budget, there appears to be sufficient access to all the required resources necessary for this undertaking. &lt;br /&gt;
:&#039;&#039;There seems to be a wealth of available experimental facilities in the associated institutions. Some of the relevant equipment includes tiled display walls, stereo-enabled desktop displays, ultra-high-resolution Wheatstone stereoscope, haptic devices, and a virtual-reality cave to come online later this year.	&lt;br /&gt;
:&#039;&#039;The leadership plan appears to be appropriate for the small size of the project personnel. The team members have a good history of interaction on related academic activities.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;The budget is well thought out, clearly described, and justified.&lt;br /&gt;
:&#039;&#039;The collaboration amongst the faculty of Brown, Stanford, and the Rhode Island School of Design appears to be very appropriate for this project. The project would bring together cognitive scientists, visualization experts, and other domain specialists to bridge the gap between theory and practice in this area of brain research. Clearly the synergy in this group would help to ensure the success of this work.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;A major focus of this proposal is the conceptualization, design, and development of a software framework for predicting user performance. It would gather information on specific models, user interfaces, and user goals and endeavor to produce probabilistic estimates of the state of users over time as predicted by the models. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a good general research proposal in the area of brain research and the understanding of interconnected relationships of complex data sets. There are some novel research concepts, including the heuristic knowledge of artists and visual designers related to cognition and perception. The leadership team is well qualified to lead this endeavor, and the outlook for good research results looks promising. The proposed budget is in line with the proposed activities and personnel commitments. This proposal should fare well as a general unsolicited proposal for NSF. I would rank this proposal as better than many of the other proposals of the Expeditions in Computing Initiative.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Review #2&lt;br /&gt;
:&#039;&#039;Rating: Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Intellectual Merit: The main goal is to push the frontiers of data visualization, with secondary thrusts on improving our cognitive understanding of how we look at data. The strengths of the proposal are that the proposed tool is, as far as I know, ground-breaking in that it will actively change based on the user. Further, the researchers are well qualified, with expertise in psychology as well as in computer science.&lt;br /&gt;
:&#039;&#039;Value added:&lt;br /&gt;
:&#039;&#039;I think the proposed research could be complex and important enough to warrant this investment; however, I could not find a clearly defined path of attack in the proposal. It fits well within the three program goals, with probably the greatest emphasis on the first. Intelligent data visualization tools that can react to the user will open many new doors in understanding science. It will impact and inspire future computer scientists, although I do not see a preference for underrepresented groups. Finally, it has the potential to stimulate significant new findings in science and in education.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Leadership plan:&lt;br /&gt;
:&#039;&#039;The leadership plan seemed well thought out. They have a diverse set of researchers, each with their unique skill set.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Broader Impact: The work will be distributed to all who want to use it and will be used in classes at Brown, affecting students at all levels (either as developers or clients). I found their vision here a little short-sighted.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;In their proposal entitled &#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools,&#039; the authors propose to use understandings from cognitive science, neuroscience, and human-computer interaction to develop better tools for examining data. In particular, they will develop software that will visualize neural connections in the brain. At the same time, they will actively measure the client and use these data to predict what the client will want to see next. They present a compelling case that we need better visualization tools for understanding the brain.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Review #3&lt;br /&gt;
:&#039;&#039;Rating: Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The dominant approach to developing interactive systems is for the developers to interact with the envisioned users to gather the general requirements for the application and to construct software based on the developer&#039;s intuitions as to how the users will actually interact with the software. Depending on the sophistication of the organization, cycles of usability testing and re-design are used to refine the interface; alternately, they may simply release the software to the users and wait for the complaints or lack of sales.&lt;br /&gt;
:&#039;&#039;An alternative to this expensive process is to construct explicit models of the perceptual and decision making processes of the users and then use these models to inform the design process. Work on cognitive models such as GOMS and ACT began about three decades ago and has progressed slowly but steadily throughout the time period and there have been a number of small demonstrations that such models can, in fact, eliminate most or all of the iteration previously required.&lt;br /&gt;
:&#039;&#039;The authors of this are planning to build cognitive models of brain scientists&#039; perception and reasoning in performing their research. They then intend to use these models to develop new and improved interaction and visualization techniques for tracing of neural pathways, with the expectation that use of the cognitive models will reduce the trial and error required to produce effective tools. Additionally, the cognitive models may even result in the invention of new visualizations through a more systematic exploration of the design space.&lt;br /&gt;
:&#039;&#039;Although the focus of the recent work in cognitive models has been to develop engineering models which are capable of being used outside of the research setting, use of such models in the design of interactive systems has been slow to catch on. If nothing else, construction of the models requires a large amount of intellectual labor and, to date, impressive examples of the use of these models to justify that labor investment have been rare. This work has the potential for providing such a critical example and could be the impetus to finally move cognitive models into widespread use.&lt;br /&gt;
:&#039;&#039;Additionally, work in as complex an area as brain science will ensure that the cognitive modeling tools can handle nearly any application.&lt;br /&gt;
:&#039;&#039;Finally, if the models do result in improved tools, the research may result in new findings in the brain science field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Value-added of funding the activity as an Expedition&lt;br /&gt;
:&#039;&#039;This work requires substantial commitment on the part of the computer scientists and psychologists to learn the brain science domain, and on the part of the brain scientists for their interaction with the cognitive scientists. Such a commitment is unlikely to be obtained with smaller, more fragmented funding. Neither industry nor venture capital is likely to fund this kind of research.&lt;br /&gt;
:&#039;&#039;The main knowledge transfer methods will be the mentoring of graduate students and the addition of formal courses intended to teach about interdisciplinary collaboration. In addition, the software they develop will be made available for distribution.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Leadership and Collaboration Plan&lt;br /&gt;
:&#039;&#039;Only two institutions are involved in this work and the senior researchers are all located at one of the two institutions. This should minimize coordination problems.&lt;br /&gt;
:&#039;&#039;Funding for the primary brain scientist in this research is at the 50% level and his supervisor has a nominal level of funding. This is a cause for concern, given the level of commitment required to support what is, essentially, someone else&#039;s area of research. A higher level of funding would be desirable, even if a substantial amount of the funded time is spent on pure brain science research.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal includes training a number of graduate and postdoctoral students; in fact, most of the funding requested is for student support.&lt;br /&gt;
:&#039;&#039;As mentioned earlier, to the extent this work results in wider acceptance and usage of cognitive models, particularly in the development of scientific software, it will accelerate the construction of interactive systems which can be used efficiently.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This work should lead to significantly wider use of cognitive modeling in interactive systems design, as well as provide researchers in brain science with superior tools.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Review #4&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Rating: Fair&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Summary&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal seeks to develop new visualization techniques that will assist brain scientists with the interpretation of high-dimensional data. For this purpose, the PI will incorporate design principles and knowledge from cognitive science, neuroscience, and human-computer interaction. The visualization system will also capture data from scientists as they use the tool and compare it with computational models from cognition, perception, and art. The tool will also be able to predict user performance and user state over time. The tool will be released under an open-source license and will be incorporated into two courses. The team has worked together for a number of years.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Criterion 1. What is the intellectual merit of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Strengths&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The kind of tool envisioned in this proposal would be invaluable, not only in neuroinformatics but also in other disciplines that deal with high-dimensional, multi-scale data, from social networks to geospatial information.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Is the work of sufficient import, scale, and/or complexity to justify this type of investment?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The brain is one of the scientific frontiers for the 21st century. The proposal has the complexity and scale worthy of this type of investment, but the proposal fails to deliver a realistic plan (if any plan at all) or even specifications.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Value of the experimental systems or shared experimental facilities proposed&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The investigators will utilize some shared facilities in their research and will share software and data that they produce to allow further research by others. The proposed software testbed will be used across the collaborators to test models of cognition and perception in the context of HCI.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Leadership and Collaboration Plan&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The investigators have worked together for a number of years, have taught classes together, and their students have attended classes from each other. No leadership or collaboration plan is discussed beyond this.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Criterion 2. What are the broader impacts of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Strengths&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed work would provide research opportunities for faculty, postdocs and students working in the project. A tool with the capabilities described in the proposal would mostly benefit the brain science research community. The tool will be used in two computer science courses at Brown.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a very ambitious proposal that seeks to develop a new generation of visualization tools for the analysis of neuroinformatics data. The tools would allow brain scientists to explore high-dimensional data, and the tool would also predict user performance and state. The proposal is inspired by principles from cognitive science, neuroscience and HCI. While this is the kind of big-picture, high-risk project that the Expeditions in Computing program is designed to support, the proposal itself fails to provide a plan for achieving its high-level, abstract objectives. The proposal does not provide a leadership or collaboration plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Review #5&lt;br /&gt;
:&#039;&#039;Rating: Fair&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PROPOSAL OBJECTIVES AND APPROACH&lt;br /&gt;
:&#039;&#039;The proposal develops a variety of tools for interactive analysis and reasoning for brain scientists.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;INTELLECTUAL MERIT&lt;br /&gt;
:&#039;&#039;The project addresses research in three areas: human-computer interaction, cognitive modeling, and connectivity in the brain. It lists 11 items that are to be developed.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;The team is fine.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;There is a great need for the tools by the computational neuroscience and cognitive science community. This project will develop some of these tools.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal can be strengthened by focusing and making the challenges and ideas more clear.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Review #6&lt;br /&gt;
:&#039;&#039;Rating: Poor&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity of developing and improving neuroscience visual analysis tools is very important. Currently access to the genre of systems described in this proposal, especially in areas with complex sub-structure such as neuroscience, is lacking and the proposed activity could have a profound effect on the state-of-the-art in the field. The possible interplay between the user-interface experts and biomedical informatics developers is a possible strength. The available team and resources are very strong and capable with an excellent track record in the field.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;I very much liked the proposed idea of using the system directly in courses taught at Brown and other collaborating institutions. I think that this is a sizable innovation that would be very welcome in the field and might even form the basis of evaluation metrics for the success of the system (which could address one of my previous criticisms of the project).&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity?&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The activity is well integrated with training and learning. There are a number of students in the group, all of whom would have the opportunity to work with professionals from a very different focus. The interdisciplinary nature of the work coupled with the need for analysis tools in biology would be an excellent synergy to cultivate.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Although the high-level conceptualization of this project is exciting, the way the project was described in the proposal was massively underspecified. Technical details were lacking, and some fundamental aspects of the project&#039;s conceptualization in terms of the scientific domain under study were missing. There was no timetable, and no evaluation was proposed to show how progress would be measured. The authors should be careful about making high-level claims concerning the possible impact of the proposed work without a more carefully constructed argument to back up the claims.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Unused Parts from Proposal==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;INTELLECTUAL MERIT (INCLUDING POTENTIAL TRANSFORMATIVE ASPECTS): &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;POSITIVE ASPECTS OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel noted that the proposal was clear and comprehensive, and addressed the criteria for SI2 proposals as well as the general NSF criteria. The proposed software would enhance both visualization of data on brain function and the knowledge discovery process of researchers in this area. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;BROADER IMPACTS: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;POSITIVE ASPECTS OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel believes that the framework of this project would be portable to such other fields as gene regulation and the analysis of other complex networks. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SHORTCOMINGS AND WEAKNESSES OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;==========================&lt;br /&gt;
&lt;br /&gt;
===Panel Summary===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SHORTCOMINGS AND WEAKNESSES OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;ADDITIONAL REVIEW CRITERIA: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed software would primarily be of use to brain researchers. Other fields would be impacted indirectly, in the sense that if this way of building software packages combining visualization with support for hypothesis testing and tracking of analyses succeeds, it might provide a pattern for those fields to follow in their own software development processes. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;PANEL RECOMMENDATION (CHECK ONE): &lt;br /&gt;
:&#039;&#039;[X] Competitive (C) &lt;br /&gt;
:&#039;&#039;[ ] Not Competitive (NC) &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;This panel summary was read by panelists who participated in the discussion of this proposal, and they concurred that the summary accurately reflects the panel discussion.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Recommendation: Competitive&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;SYNTHESIS COMMENTS: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel agreed that this was a highly competitive proposal. In particular, the proposal attacks a large but tractable problem, and the team&#039;s wide and deep expertise gives the project a high probability of success. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #1===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SSI proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;A five year project to develop software for visualization and analysis of brain circuitry by working with brain researchers to analyze their cognitive processes that need to be supported. The software is to help link the visualization workflow to a &amp;quot;decisional&amp;quot; workflow, supporting &amp;quot;reasoning and analysis at a high level, rather than just displaying data.&amp;quot; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The development phase will include studies of user interaction at both low level (eye tracking, mouse click logs) and high level (decision making). &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The PIs of this project are proposing to develop, test, and deploy software tools for the scientific study of brain circuits. The project will focus on building a cyber infrastructure software system intended to improve the speed at which brain researchers are able to complete their data analysis, and it will advance the understanding of human cognition. This project has the potential to change the way researchers in the field collect and analyze data by providing a rich set of cyber infrastructure tools for use in studying and modeling brain circuits. The intellectual merit of the project is very high, as it has the potential to greatly reduce the time required to collect and analyze data. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This project will provide an enhanced set of software tools to researchers in several areas who are conducting research related to the human brain. Areas of research in which the cyber infrastructure software can be used include gene regulation, protein signaling, and even crime and terrorism analysis, all of which have the potential to benefit. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;While this work is mostly outside my domain of expertise, it looks like a well-chosen problem and an interdisciplinary effort. In particular, the team appears to have the broad range of people needed to pull this off, from experts in the target audience (one of whom suggested the project and is listed as a co-PI) to cognitive scientists and computer scientists. In addition to the Intellectual Merit and Broader Impacts criteria, the specific &amp;quot;additional criteria&amp;quot; listed in the Program Solicitation have been explicitly and (to the extent that I can tell) well addressed. The required supplemental documents address the required points as well. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity will allow brain scientists to visualize brain functions more easily and in much more detail than in the past. Network models, coupled with sophisticated methods for dimensionality reduction, promise to offer unique insights into the workings of the human brain. Moreover, the proposed visualization tool will go through rigorous evaluation that will allow its constant improvement. The proposers form a very strong group of well-established researchers in brain science and data visualization, offering a unique collaboration. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Researchers from other disciplines, e.g., those who study gene regulation or protein signaling, or who perform crime and terrorism analysis, have the potential to benefit from the proposed software. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The goal of this proposal is to develop, test, and deploy interactive visualization tools for scientific study of brain circuits. The tools will help brain researchers view brain circuits at multiple scales and perform sophisticated analysis of research hypotheses. The team members have a decade of experience developing scientific visualization tools for scientific users and consist of experts in cognitive science, neuroscience, computer science, and visual design. They are well qualified to conduct the project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #2===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This project is focused on an area of research that spans several disciplines that will be able to utilize the cyber infrastructure software that will be developed. The PIs have a proven record of accomplishment in prior research projects. The project has intellectual merit and will have a broad impact by providing an enhanced set of software tools that will facilitate the efforts of researchers doing work related to studying and modeling the brain.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #3===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Very Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This unique collaboration between brain scientists and data visualization scientists promises to offer tremendous benefits to the scientific community. The software that will be developed will allow researchers to understand the signal pathways in the human brain in more detail than ever before. The software will be analyzed through a rigorous process, using models of cognition.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #4===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is an interdisciplinary project and the target user community is brain scientists. The tools will be made available to the public and are expected to benefit the entire brain science research community as well as other disciplines studying linked types of data. The tools can also be used in classes to help students understand connectivity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;I can see that the proposed work will be valuable for brain scientists studying the connectivity and dynamics of neural circuits in the intact brain, as existing systems all have limitations and cannot satisfy the needs of brain scientists as discussed in the proposal. Other scientific domains may need similar tools. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal focuses on an interesting problem for which it also provides a novel solution. Therefore, I think this is a quality proposal and worthy of support.&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=User:Nathan_Malkin&amp;diff=5477</id>
		<title>User:Nathan Malkin</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=User:Nathan_Malkin&amp;diff=5477"/>
		<updated>2011-10-04T14:26:52Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Class of &#039;13.&lt;br /&gt;
&lt;br /&gt;
[mailto:nmalkin@cs.brown.edu nmalkin@cs.brown.edu]&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Review_Response&amp;diff=5476</id>
		<title>CS295J/Review Response</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Review_Response&amp;diff=5476"/>
		<updated>2011-10-04T14:18:30Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Full Proposal (SI2-SSI) Review and Response ==&lt;br /&gt;
&lt;br /&gt;
Dear reviewers,&lt;br /&gt;
&lt;br /&gt;
Thank you so much for your insightful, constructive, and helpful comments.  We particularly appreciated the positive evaluation of the proposal and the potential of our methods for enhancing visual data analysis with regards to brain functioning.  &lt;br /&gt;
&lt;br /&gt;
We are overwhelmingly grateful to have received a &#039;competitive&#039; summary evaluation.&lt;br /&gt;
&lt;br /&gt;
We have noted and addressed the criticisms below in the context of the reviews received.  Our responses and details concerning our revisions are in bold.&lt;br /&gt;
&lt;br /&gt;
Regards,&lt;br /&gt;
&lt;br /&gt;
David Laidlaw et al.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;1.&#039;&#039;&#039; &#039;&#039;The panel&#039;s discussion focused on the sustainability issue beyond the end of the project support timeframe. While a section of the proposal talks about the Outreach, Education, and Sustainability Plan, the community outreach and sustainability aspects are treated somewhat cursorily. These two issues are closely related: without community support, the software is unlikely to be sustainable in the long run. On the other hand, the proposers appear to be well known in their field, which may enhance community uptake. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;As added in section G, the plan for creating long-term sustainability now includes more detail. Specifically, we will focus on creating a lasting open-source community that provides frequent updates to the code, centralized by the core research team following a Macro R&amp;amp;D development infrastructure.&#039;&#039;&#039; ([[User: Stephen Brawner | Stephen Brawner]])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;2.&#039;&#039;&#039; &#039;&#039;The proposal was perhaps over-ambitious; aspects of the evaluation (for example, eye tracking) will create large amounts of data that will require correspondingly intensive data analysis. However, the panel felt that even if the project did not accomplish every detail of the proposal, it would still be highly worthwhile. Similarly, while some aspects of the project might be seen as risky, a certain amount of risk is acceptable in NSF proposals, or even expected. Moreover, any risk is mitigated by the qualifications of the PIs, as exemplified by their excellent track record. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;The proposers have laid out a detailed five year plan. One panelist questioned whether sufficient attention had been paid to issues of sustainability, in particular there was no mention made of plans for software support beyond the end of the five year plan. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1&#039;&#039;&#039; ([[User: Stephen Brawner|Stephen]])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;4.&#039;&#039;&#039; &#039;&#039;The proposers see this as sort of a prototype of the way software could be developed to support other scientific endeavors; this effect appears to constitute most of the Broader Impact (I wouldn&#039;t count benefit to &amp;quot;the entire brain science research community&amp;quot;, mentioned in the BI statement, as broader impact). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;5.&#039;&#039;&#039; &#039;&#039;One quibble I have in the Management and Coordination Plan is the statement that &amp;quot;The Stanford researchers will visit Brown if face-to-face interactions become necessary.&amp;quot; While electronic communication allows collaboration in ways that would not have been possible before, I think it&#039;s not a question of whether face-to-face will be necessary, but how often. Fortunately, this appears to have been built into the travel budget.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;As noted by the reviewer, face-to-face communication will be necessary: not an &#039;if&#039; but a &#039;when&#039;. Researchers at Brown will plan to coordinate visits to California, and vice versa.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;6.&#039;&#039;&#039; &#039;&#039;As the proposal is for SI2-SSI, I would like to know more on what the team plans to do to ensure the sustainability of the software and develop open-source community support. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;7.&#039;&#039;&#039; &#039;&#039;It is also unclear what the team will do to integrate diversity into the proposed activity. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-proposal (Expeditions) Reviews and Response ==&lt;br /&gt;
:&#039;&#039;Proposal Number: 1064261&lt;br /&gt;
&lt;br /&gt;
===Panel Summary for Expeditions Preliminary Proposals===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Preliminary Proposal Summary (Vision/Goals of the Expedition) &lt;br /&gt;
:&#039;&#039;This proposal is focused on blending the areas of cognitive science, neuroscience, and HCI to develop new tools that would help in understanding the interrelationships in complex interconnected data sets. The work would focus on brain science activities but would likely be applicable to many other areas that have complex interconnected data such as crime and terrorism analysis. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The strength of this proposal is in the benefits it would bring to the intersections of cognitive science, neuroscience, and HCI. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Intellectual Merit &lt;br /&gt;
:&#039;&#039;The authors of this proposal are planning to build cognitive models of brain scientists&#039; perception and reasoning in performing their research. They then intend to use these models to develop new and improved interaction and visualization techniques for tracing neural pathways, with the expectation that use of the cognitive models will reduce the trial and error required to produce effective tools. Additionally, the cognitive models may even result in the invention of new visualizations through a more systematic exploration of the design space. One novel aspect of this proposal is the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception.&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;1. &#039;&#039;&#039;However, the proposal does not elaborate on how the PIs would use the heuristic knowledge of artists and visual designers; this is only mentioned in the introduction but never developed in the proposal. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We have added a section called &amp;quot;Evaluating Predictive Models&amp;quot; that outlines performance metrics the system will predict, e.g., task completion time, number of insights (see Saraiya et al., 2005), and methods for predicting user state and goals from observed user interactions and eye-tracking.  We plan to collect and compare heuristic design knowledge with predictive models of visual-search behavior using eye-tracking; one expected contribution will be validating or revising existing design guidelines.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Overall the panel had difficulty coming to a common understanding of the proposal contents. The range of reviews mirrored the range of what people had read into the proposal and their enthusiasm for the proposal topic. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;That said, the proposal failed to convince the reviewers along a number of dimensions. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;2. &#039;&#039;&#039;First, the proposal fails to articulate a clear research plan and a clear set of outcomes. As an example, the proposal mentions a number of cognitive concepts that will be incorporated in the interface, such as causal reasoning and dual systems theory. Utilizing cognitive principles to inform this research is applauded, but we expected to see some indication of how they would be put into practice. The proposal is even less clear when it comes to goal maintenance.&lt;br /&gt;
&#039;&#039;&#039;We have added a section called &amp;quot;Task Analysis&amp;quot; that outlines how we will analyze and encode tasks and the sub-tasks that prime or conflict with them.  In a subsection titled &amp;quot;Applications&amp;quot;, we outline the design of a software module that facilitates analyst &#039;goal maintenance&#039; by collecting user input, predicting goals from it, and displaying suggestions for similar or priming sub-goals.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ] &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; &amp;quot;We have included a plan for implementing cognitive principles. Strategies (through eye-tracking and other means outlined above) will include monitoring users&#039; preferences and cognitive strategies through reaction time data and verbal reports from users. Historically, such strategies have proved successful when gathering user information, input, and comprehension; we expect the same in our implementation. We also intend to use eye-tracking and reaction time procedures in order to best gauge cognitive load to the extent that it affects user experience, as we strive to understand user adaptation and navigation strategies within our applications.&amp;quot;&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara Kliman-Silver]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;3. &#039;&#039;&#039;A more ambitious undertaking is that of predicting user performance. This aspect of the work is motivated by previous studies showing that student performance on algebra problems can be predicted based on eye movements. This is a very interesting result, but it is not clear how it would apply to an entirely different domain that requires a different and more taxing set of cognitive skills. The proposal does not describe how user behavior will be measured (other than through eye trackers) or even how performance is going to be measured. Algebra problems are generally closed ended, with a well-defined solution, whereas exploratory data analysis of neuroinformatics data is an open-ended problem.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;4. &#039;&#039;&#039;The proposal needs to be much more explicit about the techniques used to derive predictions including the track records of these techniques and the ways in which these techniques may need to be enhanced to be used in particular application domains. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;5. &#039;&#039;&#039;Another concern is methodological. We suggest that the team identify benchmark tasks that would be representative of the cognitive skills that the interface attempts to capture, and would also vary in their degree of complexity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;Original: We have identified a set of benchmark tasks to evaluate users&#039; cognitive skills when working with our interface. These tasks test how well the interface increases users&#039; performance in the following cognitive aspects: multi-tasking, task-switching, visual search, reasoning, decision-making, and cognitive workload. A more detailed description of the design of these tasks, including an outline of the experimental procedure, the equipment and techniques involved, and the data analysis methods, can be found in section e.5 (Formal Testing).&#039;&#039;&#039; ([[User: Hua Guo|Hua]], Sep. 22, 2011)&lt;br /&gt;
&#039;&#039;&#039;Revised: We will break down the brain circuitry exploration process into a set of simpler cognitive tasks through cognitive task analysis. Benchmark tests that involve solving simple problems related to brain circuitry analysis will then be designed based on the identified cognitive tasks. These benchmark tests can then be used to evaluate users&#039; cognitive performance when working with our system, in comparison with existing systems.&#039;&#039;&#039; ([[User: Hua Guo|Hua]], Sep. 27, 2011)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;As the brain exploration process may involve essential complex tasks that cannot be broken down into simpler ones, we choose to evaluate the tool qualitatively over a long-term period: we will observe and interview users about their experience with the tools in order to discover critical features that existing tools are missing. The researcher will also provide technical support if necessary. ([[User:Diem Tran|Diem Tran]] 21:22, 28 September 2011 (EDT))&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;Broader Impacts &lt;br /&gt;
:&#039;&#039;The proposed work would provide research opportunities for faculty, postdocs and students working in the project. A tool with the capabilities described in the proposal would mostly benefit the brain science research community. The tool will be used in two computer science courses at Brown. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;6. &#039;&#039;&#039;The panel thought that the proposal team could do more work in considering possibilities for broader societal impact and for designing more impactful outreach and education activities that would extend beyond the Brown community. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We have highlighted some of the broader implications for our project, such as making tools publicly available for other communities of brain scientists, medical researchers, and even instructors to learn about brain connectivity through our visualization initiatives. We have also outlined plans to develop tools for elementary, middle, and high school students as they explore the brain, its anatomy, and its capabilities.&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara Kliman-Silver]] ]&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Rationale for the Recommendation &lt;br /&gt;
:&#039;&#039;Despite enthusiasm for this topic and the potential for significant impact if successful, the panel could not support the pre-proposal at this time due to a lack of coherent vision and a realizable plan of implementation. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a very ambitious proposal that seeks to develop a new generation of visualization tools for the analysis of neuroinformatics data. While this is the kind of big-picture, high-risk project that the Expeditions in Computing program is designed to support, the proposal itself fails to provide a plan for achieving its high-level, abstract objectives. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Put an X next to the appropriate category &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Invite &lt;br /&gt;
:&#039;&#039;Invite-if-possible &lt;br /&gt;
:&#039;&#039;Do-Not-Invite	 X &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This summary was read by/to the panel, and the panel concurred that the summary accurately reflects the panel discussion.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Recommendation: Do Not Invite&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #1===&lt;br /&gt;
:&#039;&#039;Rating: Very Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal is focused on blending the areas of cognitive science, neuroscience, and HCI to develop new tools that would help in understanding the interrelationships in complex interconnected data sets. One novel aspect of this proposal is the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception. The work would focus on brain science activities but would likely be applicable to many other areas that have complex interconnected data, such as crime and terrorism analysis. The strength of this proposal is in the benefits it would bring to the intersections of cognitive science, neuroscience, and HCI. &lt;br /&gt;
:&#039;&#039;The leadership team of distinguished faculty and researchers is well qualified to conduct this research. The entire team is an appropriate blend of researchers representing all of the subordinate areas of research. &lt;br /&gt;
:&#039;&#039;The quality of prior work from all participating members is uniformly outstanding and appropriate for this endeavor. &lt;br /&gt;
:&#039;&#039;While there has been much work of a similar nature done in the past, this project would move our knowledge forward on a number of new fronts. In addition, the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception is novel. &lt;br /&gt;
:&#039;&#039;The proposal is well conceived and organized and clearly presented. &lt;br /&gt;
:&#039;&#039;With the pupil tracking device requested in the budget, there appears to be sufficient access to all the required resources necessary for this undertaking. &lt;br /&gt;
:&#039;&#039;There seems to be a wealth of available experimental facilities in the associated institutions. Some of the relevant equipment includes tiled display walls, stereo-enabled desktop displays, ultra-high-resolution Wheatstone stereoscope, haptic devices, and a virtual-reality cave to come online later this year.	&lt;br /&gt;
:&#039;&#039;The leadership plan appears to be appropriate for the small size of the project personnel. The team members have a good history of interaction on related academic activities. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;7. &#039;&#039;&#039;Other than the nominal support for traditional computing needs by the computer science department at Brown, there did not appear to be any specific institutional support for this proposal. &lt;br /&gt;
:&#039;&#039;The budget is well thought out, clearly described, and justified. &lt;br /&gt;
:&#039;&#039;The collaboration amongst the faculty of Brown, Stanford, and the Rhode Island School of Design appears to be very appropriate for this project. The project would bring together cognitive scientists, visualization experts, and other domain specialists to bridge the gap between theory and practice in this area of brain research. Clearly the synergy in this group would help to insure the success of this work. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;8. &#039;&#039;&#039;The value of the proposed work appears to be very important in this specific area of research. While this is not my specific area of expertise, I have some concern whether this project meets the overarching goals of the Expeditions in Computing Initiative. In particular, it would seem to have the potential to stimulate interest in this area for graduate students in related disciplines, but may not be very successful in drawing attention to STEM studies amongst the K-12 age group. &lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 6.&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara Kliman-Silver]] ]&lt;br /&gt;
:&#039;&#039;A major focus of this proposal is the conceptualization, design, and development of a software framework for predicting user performance. It would gather information on specific models, user interfaces, and user goals and endeavor to produce probabilistic estimates of the state of users over time as predicted by the models. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;9. &#039;&#039;&#039;I did not see anything in the proposal that indicated that it would be of particular interest to youth and underrepresented groups. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;10. &#039;&#039;&#039;The single sentence on stimulating effective knowledge transfer did not seem convincing. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a good general research proposal in the area of brain research and the understanding of interconnected relationships of complex data sets. There are some novel research concepts, including the heuristic knowledge of artists and visual designers related to cognition and perception. The leadership team is well qualified to lead this endeavor, and the outlook for good research results looks promising. The proposed budget is in line with the proposed activities and personnel commitments. This proposal should fare well as a general unsolicited proposal for NSF. I would rank this proposal to be better than many of the other proposals of the Expeditions in Computing Initiative.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #2===&lt;br /&gt;
:&#039;&#039;Rating: Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Intellectual Merit: The main goal is to push the frontiers of data visualization, with secondary thrusts on improved learning in cognition about how we look at data. The strengths of the proposal are that the proposed tool is, as far as I know, ground-breaking in that it will actively change based on the user. Further, the researchers are well qualified, with expertise in psychology as well as in computer science. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;11. &#039;&#039;&#039;The major weakness was that I was not really clear how the software tool would eventually work. For instance, they talked a little about following pupils of the user - but I was not clear how they would capitalize off of that knowledge. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;As mentioned in the response to the introduction of the pre-proposal, we have expanded our plan for how we will incorporate pupil data from eye tracking into creating an interface that is optimized for data analysis. ([[User:Jenna Zeigen|Jenna Zeigen]])&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Value added: &lt;br /&gt;
:&#039;&#039;I think the proposed research could be complex and important enough to warrant this investment - however I could not find a clearly defined path of attack in the proposal. It fits well within the 3 program goals, with probably the greatest emphasis on the first. Intelligent data visualization tools that can react to the user will open many new doors in understanding science. It will impact and inspire future computer scientists, although I do not see a preference for underrepresented groups. Finally, it has the potential to stimulate new significant findings in science and in education. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Leadership plan: &lt;br /&gt;
:&#039;&#039;The leadership plan seemed well thought out. They have a diverse set of researchers, each with their unique skill set. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Broader Impact: The work will be distributed to all who want to use it and will be used in classes at Brown, affecting students at all levels (either as developers or clients). I found their vision here a little short-sighted. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;In their proposal entitled &#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools,&#039; the authors propose to use understandings from cognitive science, neuroscience, and human-computer interaction to develop better tools for examining data. In particular, they will develop software that will visualize neural connections in the brain. At the same time they will actively measure the client and use these data to predict what the client will want to see next. They present a compelling case that we need better visualization tools for understanding the brain.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #3===&lt;br /&gt;
:&#039;&#039;Rating: Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The dominant approach to developing interactive systems is for the developers to interact with the envisioned users to gather the general requirements for the application and to construct software based on the developer&#039;s intuitions as to how the users will actually interact with the software. Depending on the sophistication of the organization, cycles of usability testing and re-design are used to refine the interface; alternately, they may simply release the software to the users and wait for the complaints or lack of sales. &lt;br /&gt;
:&#039;&#039;An alternative to this expensive process is to construct explicit models of the perceptual and decision making processes of the users and then use these models to inform the design process. Work on cognitive models such as GOMS and ACT began about three decades ago and has progressed slowly but steadily throughout the time period and there have been a number of small demonstrations that such models can, in fact, eliminate most or all of the iteration previously required. &lt;br /&gt;
:&#039;&#039;The authors of this are planning to build cognitive models of brain scientists&#039; perception and reasoning in performing their research. They then intend to use these models to develop new and improved interaction and visualization techniques for tracing of neural pathways, with the expectation that use of the cognitive models will reduce the trial and error required to produce effective tools. Additionally, the cognitive models may even result in the invention of new visualizations through a more systematic exploration of the design space. &lt;br /&gt;
:&#039;&#039;Although the focus of the recent work in cognitive models has been to develop engineering models which are capable of being used outside of the research setting, use of such models in the design of interactive systems has been slow to catch on. If nothing else, construction of the models requires a large amount of intellectual labor and, to date, impressive examples of the use of these models to justify that labor investment have been rare. This work has the potential for providing such a critical example and could be the impetus to finally move cognitive models into widespread use. &lt;br /&gt;
:&#039;&#039;Additionally, work in as complex an area as brain science will ensure that the cognitive modeling tools can handle nearly any application. &lt;br /&gt;
:&#039;&#039;Finally, if the models do result in improved tools, the research may result in new findings in the brain science field. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;12. &#039;&#039;&#039;The primary risk in this proposal is that the cognitive models which can be created are too weak to support the design process. There is no guarantee that the computer scientists and psychologists doing this research will be able to understand and model the cognitive processes of a brain scientist. &lt;br /&gt;
&#039;&#039;&#039;We have included our experience developing visualization tools for brain study at Brown, and we have added a detailed timeline; in each phase, brain scientists will be involved in model evaluation and testing and will provide feedback. ( --- [[User:Chen Xu|Chen Xu]])&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Value-added of funding the activity as an Expedition &lt;br /&gt;
:&#039;&#039;This work requires substantial commitment on the part of the computer scientists and psychologists to learn the brain science domain and on the part of the brain scientists for their interaction with the cognitive scientists. Such a commitment is unlikely to be obtained with smaller, more fragmented funding. Industry or venture capital are unlikely to fund this kind of research. &lt;br /&gt;
:&#039;&#039;The main knowledge transfer methods will be the mentoring of graduate students and the addition of formal courses intended to teach about interdisciplinary collaboration. In addition, the software they develop will be made available for distribution. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;13. &#039;&#039;&#039;Except through the rather limited vehicle of scholarly publication, it is not clear how the cognitive models themselves are to be made available. The authors may want to consider using their own visualization capabilities to explain the models. &lt;br /&gt;
&#039;&#039;&#039;On page x, section y, we have included preliminary visualizations, which are examples of the visualizations we will implement to explain the cognitive models we intend to incorporate in our research.&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Leadership and Collaboration Plan &lt;br /&gt;
:&#039;&#039;Only two institutions are involved in this work and the senior researchers are all located at one of the two institutions. This should minimize coordination problems. &lt;br /&gt;
:&#039;&#039;Funding for the primary brain scientist in this research is at the 50% level and his supervisor has a nominal level of funding. This is a cause for concern, given the level of commitment required to support what is, essentially, someone else&#039;s area of research. A higher level of funding would be desirable, even if a substantial amount of the funded time is spent on pure brain science research. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal includes training a number of graduate and postdoctoral students; in fact, most of the funding requested is for student support. &lt;br /&gt;
:&#039;&#039;As mentioned earlier, to the extent this work results in wider acceptance and usage of cognitive models, particularly in the development of scientific software, it will accelerate the construction of interactive systems which can be used efficiently. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This work should lead to significantly wider use of cognitive modeling in interactive systems design, as well as provide researchers in brain science with superior tools.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #4===&lt;br /&gt;
:&#039;&#039;Rating: Fair&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Summary &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal seeks to develop new visualization techniques that will assist brain scientists with the interpretation of high-dimensional data. For this purpose, the PI will incorporate design principles and knowledge from cognitive science, neuroscience, and human-computer interaction. The visualization system will also capture data on scientists as they use the tool and compare it with computational models from cognition, perception, and art. The tool will also be able to predict user performance and user state over time. The tool will be released through an open-source license and will be incorporated into two courses. The team has worked together for a number of years. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Criterion 1. What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Strengths &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The kind of tool envisioned in this proposal would be invaluable, not only in neuroinformatics but also in other disciplines that deal with high-dimensional, multi-scale data, from social networks to geospatial information. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Weaknesses &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Despite its laudable objective, this work is not ready for further scrutiny as a full proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;14. &#039;&#039;&#039;First, the proposal fails to articulate a clear research plan and a clear set of outcomes. As an example, the proposal mentions a number of cognitive concepts that will be incorporated in the interface, such as causal reasoning and dual systems theory. How are these principles going to be used to design a better visualization, and how are they going to be tested? I very much like the idea of using cognitive principles, but would have expected to see some indication of how they would be put into practice. The proposal is even less clear when it comes to goal maintenance: &amp;quot;We will use these principles to determine which tasks to make easily accessible to users and which to put in the background.&amp;quot; This is a general problem for interface design, not a solution to the problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;Reviewer 4 indicated that the proposal was not explicit about how cognitive principles would be implemented in the project. In section ___ on page __, we have elucidated how the listed principles will concretely lead to better, more optimized visualizations, and described the studies we plan to perform to show that the visualizations are indeed efficient. ([[User:Jenna Zeigen|Jenna Zeigen]])&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;15. &#039;&#039;&#039;A more ambitious undertaking is that of predicting user performance. This aspect of the work is motivated by previous studies showing that student performance on algebra problems can be predicted based on eye movements. This is a very interesting result, but it is not clear how it would apply to an entirely different domain that requires a different and more taxing set of cognitive skills. This is a wild extrapolation. The proposal does not describe how user behavior will be measured (other than through eye trackers) or even how performance is going to be measured. Algebra problems are generally closed ended, with a well-defined solution, whereas exploratory data analysis of neuroinformatics data is an open-ended problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We have made a detailed timeline; in phase/year 3, we describe how we will measure user performance and evaluate the cognitive models. We will evaluate the models&#039; predictions against actual user data: in addition to eye tracking, we will adopt computer-interaction logging, video logging, and skin conductance response, and the results will help refine the models. ( --- [[User:Chen Xu|Chen Xu]])&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;16. &#039;&#039;&#039;Another concern is methodological. Say that the proposal had articulated a clear plan and a reasonable set of deliverables for a new generation of visualization interfaces. Wouldn&#039;t it be better to test this interface on some benchmark problems, and see how it facilitates performance relative to a standard interface? These benchmark problems would be representative of the cognitive skills that the interface attempts to capture, and would also vary in their degree of complexity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We will primarily test user-interface performance empirically, grounded in the principles of perception and goal selection. We have also included standard measures, such as the time an average user takes to complete a unit task, and we will compare the previous and present interfaces on a unit-task performance test. ( --- [[User:Wenjun Wang|Wenjun Wang]])&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;To what extent does the proposed activity suggest and explore creative, original, or potentially transformative concepts? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal is very ambitious in its overall objectives. A visualization tool having the characteristics suggested in the proposal would be invaluable to brain science as well as to other scientific disciplines dealing with high-dimensional complex data, such as genomics/proteomics, geospatial analysis, network analysis, etc. However, the proposal fails to turn a high-level concept into a realizable implementation. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;Our framework allows additional modules to be incorporated as the system evolves. Both the interface that facilitates viewing data and the model that makes predictions can be applied to other disciplines. ( --- [[User:Wenjun Wang|Wenjun Wang]])&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;Is the work of sufficient import, scale, and/or complexity to justify this type of investment? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The brain is one of the scientific frontiers for the 21st century. The proposal has the complexity and scale worthy of this type of investment, but the proposal fails to deliver a realistic plan (if any plan at all) or even specifications. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Will the work contribute to realization of the EIC program goals and is it likely to demonstrate completion of these goals? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Understanding the brain is one of our greatest scientific challenges. Unfortunately, without a clear research plan it is difficult to assess the likelihood that the proposal will be able to demonstrate completion of its overall goals. &lt;br /&gt;
&#039;&#039;&#039;We have added a new section named &amp;quot;Detailed Plan&amp;quot; which describes in detail our proposed plan throughout the project duration. We divide the plan into milestones, allocate manpower and budget, and provide a tentative schedule for each milestone. [ [[User:Diem Tran|Diem Tran]] ]&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Value of the experimental systems or shared experimental facilities proposed &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The investigators will utilize some shared facilities in their research and will share software and data that they produce to allow further research by others. The proposed software testbed will be used across the collaborators to test models of cognition and perception in the context of HCI. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Leadership and Collaboration Plan &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The investigators have worked together for a number of years, have taught classes together, and their students have attended classes from each other. No leadership or collaboration plan is discussed beyond this. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Criterion 2. What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Strengths &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed work would provide research opportunities for faculty, postdocs and students working in the project. A tool with the capabilities described in the proposal would mostly benefit the brain science research community. The tool will be used in two computer science courses at Brown. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Weaknesses &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;17. &#039;&#039;&#039;Societal benefits of this tool would derive from its scientific merit to the extent that it would help understand the brain. Given the characteristics of this project, I wonder if other NSF funding opportunities would be more suitable, such as the FODAVA program or the interdisciplinary program in neuroscience at CISE. I also wonder whether this work should be funded instead by NIH (NIBIB, NIMH). The budget contains a request for $3,000 to cover costs of animal (mouse) care; why is this needed given that the proposal is for software development? &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;While the scientific merit of this project is most directly applicable to understanding the brain, it should be noted that the goal to “convincingly demonstrate that the employed techniques facilitate better analysis” (described in section c.1) is unique to this proposal and would have impacts reaching into any other discipline in which vast amounts of data need to be interpreted. [ [[User:Michael Spector|Michael Spector]] ]&lt;br /&gt;
&#039;&#039;&#039;In addition, it is our hope that much of the development we do in creating an interface and visualizations that adapt to users&#039; needs can be readily incorporated into a much wider range of programs, not only across disciplines but also across educational levels, allowing students to find a way to learn and explore that best fits their learning style and needs. [ [[User:Jenna Zeigen|Jenna Zeigen]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a very ambitious proposal that seeks to develop a new generation of visualization tools for the analysis of neuroinformatics data. The tools would allow brain scientists to explore high-dimensional data, and the tool would also predict user performance and state. The proposal is inspired by principles from cognitive science, neuroscience and HCI. While this is the kind of big-picture, high-risk project that the Expeditions in Computing program is designed to support, the proposal itself fails to provide a plan for achieving its high-level, abstract objectives. The proposal does not provide a leadership or collaboration plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #5===&lt;br /&gt;
:&#039;&#039;Rating: Fair&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PROPOSAL OBJECTIVES AND APPROACH &lt;br /&gt;
:&#039;&#039;The proposal develops a variety of tools for interactive analysis and reasoning for brain scientists. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;INTELLECTUAL MERIT &lt;br /&gt;
:&#039;&#039;The project addresses research in three areas: human-computer interaction, cognitive modeling, and connectivity in the brain. It lists 11 items that are to be developed. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;18. &#039;&#039;&#039;The proposal is vague and contains a lot of repetition. It is not well written. It is not clear what research experiments are performed. &lt;br /&gt;
&#039;&#039;&#039;As described in the proposal, we apply cognition and visualization principles to develop a visualization tool for brain circuits and connectivity. We evaluate our tool using methodologies established in cognitive science, psychology, and human-computer interaction research. Detailed explanations of our approach are given in the responses to comments 1, 2, 5, 14, and 16. [ [[User:Diem Tran | Diem Tran]] ]&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The team is fine. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;There is a great need for the tools by the computational neuroscience and cognitive science community. This project will develop some of these tools. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;19. &#039;&#039;&#039;The details of the educational plan are not given. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;As noted on page 26 (section g, &amp;quot;Outreach, Education, and Sustainability Plan&amp;quot;), we plan to educate users on the use of our developed tool through the same channels as our outreach efforts; namely, to the visualization, interaction, and brain science communities. We also note that we will deploy our tool in academic settings, such as labs and classrooms in the brain sciences, which will benefit those groups from an educational point of view, as well as provide another avenue through which we can receive feedback on our software. [ [[User:Michael Spector|Michael Spector]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal can be strengthened by focusing and making the challenges and ideas more clear.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #6===&lt;br /&gt;
:&#039;&#039;Rating: Poor&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity of developing and improving neuroscience visual analysis tools is very important. Currently access to the genre of systems described in this proposal, especially in areas with complex sub-structure such as neuroscience, is lacking and the proposed activity could have a profound effect on the state-of-the-art in the field. The possible interplay between the user-interface experts and biomedical informatics developers is a possible strength. The available team and resources are very strong and capable with an excellent track record in the field. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;20. &#039;&#039;&#039;However, the proposed activities within the proposal are completely underspecified. Given the scale of the project and the challenging nature of the problems to be faced, the level of technical detail and planning presented in the proposal is insufficient and unconvincing. A great many claims are made in the proposal with no clear measurable end-points to determine how success within the project could be evaluated. Specifically, the authors claim to employ developments in concepts of cognition and perception to assist scientists&#039; reasoning. What aspects of reasoning? For which tasks? Within which discipline? If this is only related to tractography, what sort of scientific hypotheses do the applicants expect to address? Are they relating these representations of neural connectivity to studies in animals? Are they relating these analyses to other modalities of imaging data? How do they intend to reason over the complex semantics of these other experimental types? These are all glaring omissions from the proposal. The description of the cognitive science aspects of the project was marginally better specified, but I still found the details lacking. For example, the claim was made that &#039;our system will tune itself to individual work styles.&#039; How? What technical elements will the system exploit to accomplish this? In particular, the applicants must spend more time specifying the precise tasks that the system is designed to tackle before it is possible to improve or optimize performance at that task. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Response: We have added a section called &amp;quot;Task Analysis,&amp;quot; in which we outline a plan for specifying visual analysis tasks through user observation. We will begin with loosely structured observations to generate hypotheses about primary tasks and user strategies in this domain, and continue with cognitive task analysis methods to focus and test these hypotheses. We present preliminary results that show the viability of this plan.  ([[User:Caroline Ziemkiewicz|Caroline Ziemkiewicz]] 10:48, 22 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;21. &#039;&#039;&#039;In particular, the statement &amp;quot;We expect the core element to evolve significantly through the five years of the project. It cannot be meaningfully defined without the data we will acquire from users, so details beyond this overview are not possible yet&amp;quot; is incredibly revealing and suggests that the applicants themselves do not have a clear idea of how they intend to solve these problems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We have included a diagram to clarify the software architecture described in the proposal.  We implemented a simple version of the architecture as a proof of concept, as described in the section &amp;quot;Preliminary Work&amp;quot;.  Individual modules will be developed and adjusted with an iterative testing schedule.&#039;&#039;&#039;  [It would also be good to add, &amp;quot;This is similar to another problem/tool that worked successfully in this cited paper&amp;quot;, but I haven&#039;t found a good example yet.] [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;22. &#039;&#039;&#039;The Figures presented in the proposal looked confusing and uninformative, adding nothing to the argument that these systems would actually help a scientist understand the underlying data. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We have incorporated a new set of figures into the proposal, which are more focused on the concept and design of the proposed system; in particular, we have included some figures to demonstrate the prototype design for different views of the system. [ [[User:Hua Guo|Hua]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;I very much liked the proposed idea of using the system directly in courses taught at Brown and other collaborating institutions. I think that this is a sizable innovation that would be very welcome in the field and might even form the basis of evaluation metrics for the success of the system (which could address one of my previous criticisms of the project). &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The activity is well integrated with training and learning. There are a number of students in the group, all of whom would have the opportunity to work with professionals from a very different focus. The interdisciplinary nature of the work coupled with the need for analysis tools in biology would be an excellent synergy to cultivate. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;23. &#039;&#039;&#039;The presence of high-end graphics equipment (such as a virtual-reality cave and haptic displays, etc.) is a plus for the project but also is a hindrance to enable the developers to release their work to a broader audience. If the system is only available to the small number of people who have access to such facilities, then the impact of the work would be lessened.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We have expanded our dissemination plan to include multiple distributions for different end-user workstations.  Modules that require high-end hardware for online data capture (e.g., eye-tracking) or display will be disabled in the basic distributions for Windows and Mac.  Our research agenda includes experiments with this equipment -- for instance, to validate cognitive models or design guidelines -- but it is not necessary for most end users who download releases of the tool.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;24. &#039;&#039;&#039;The proposed activity makes no specific claims to target or support underrepresented groups explicitly. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;25. &#039;&#039;&#039;The underspecification of the technical aspects of the project undermines the open-source distribution of the code. It is technically demanding to generate usable open-source products for other people to use. Notably, browsing the co-PIs&#039; webpages, there were no easily accessible open-source software products noticeably available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;- Since the review, the PI&#039;s lab has published a large selection of source code from previous projects, available from its website (vis.cs.brown.edu), with provisions for multiple platforms. &amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;- We are undertaking a project to make anonymized images of healthy and abnormal brains (and related data), from our projects and those of a number of collaborators worldwide, freely available to the scientific community. &amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;- Development will be done using a publicly visible source code repository (e.g., GitHub), so that other scientists and the public can track progress, comment on changes, provide feedback, and even contribute pieces of code to the software. &amp;lt;br /&amp;gt;&lt;br /&gt;
[ [[User:Nathan Malkin|Nathan]] ]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Although the high-level conceptualization of this project is exciting, the way that the project was described in the proposal was massively underspecified. Technical details were lacking and some fundamental aspects of the project&#039;s conceptualization in terms of the scientific domain under study were missing. There was no timetable, and no evaluation proposed to see how progress would be measured. The authors should be careful about making high-level claims concerning the possible impact of the proposed work without a more carefully constructed argument to back up the claims.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Removed Chunks of the Proposal (not included in the response letter)===&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Proposal Information&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Received by NSF:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;09/10/10&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;David Laidlaw&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Co-PI(s):&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;David Badre&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Steven Sloman&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This Proposal has been Electronically Signed by the Authorized Organizational Representative (AOR).&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program Information&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Division:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Division of Computer and Communication Foundations&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Program Officer:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Mitra Basu&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PO Telephone:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;(703) 292-8649&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PO Email:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;mbasu@nsf.gov&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
&lt;br /&gt;
==Unused Parts from Proposal==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;INTELLECTUAL MERIT (INCLUDING POTENTIAL TRANSFORMATIVE ASPECTS): &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;POSITIVE ASPECTS OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel noted that the proposal was clear and comprehensive, and addressed the criteria for SI2 proposals as well as the general NSF criteria. The proposed software would enhance both visualization of data on brain function and the knowledge discovery process of researchers in this area. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;BROADER IMPACTS: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;POSITIVE ASPECTS OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel believes that the framework of this project would be portable to such other fields as gene regulation and the analysis of other complex networks. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SHORTCOMINGS AND WEAKNESSES OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;==========================&lt;br /&gt;
&lt;br /&gt;
===Panel Summary===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SHORTCOMINGS AND WEAKNESSES OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;ADDITIONAL REVIEW CRITERIA: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed software would primarily be of use to brain researchers. Other fields would be impacted indirectly, in the sense that if this way of building software packages combining visualization with support for hypothesis testing and tracking of analyses succeeds, it might provide a pattern for those fields to follow in their own software development processes. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;PANEL RECOMMENDATION (CHECK ONE): &lt;br /&gt;
:&#039;&#039;[X] Competitive (C) &lt;br /&gt;
:&#039;&#039;[ ] Not Competitive (NC) &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;This panel summary was read by panelists who participated in the discussion of this proposal, and they concurred that the summary accurately reflects the panel discussion.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Recommendation: Competitive&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;SYNTHESIS COMMENTS: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel agreed that this was a highly competitive proposal. In particular, the proposal attacks a large but tractable problem, and the team&#039;s wide and deep expertise gives the project a high probability of success. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #1===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SSI proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;A five year project to develop software for visualization and analysis of brain circuitry by working with brain researchers to analyze their cognitive processes that need to be supported. The software is to help link the visualization workflow to a &amp;quot;decisional&amp;quot; workflow, supporting &amp;quot;reasoning and analysis at a high level, rather than just displaying data.&amp;quot; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The development phase will include studies of user interaction at both low level (eye tracking, mouse click logs) and high level (decision making). &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The PIs of this project are proposing to develop, test, and deploy software tools for scientific study of brain circuits. The project will focus on building a cyber infrastructure software system that is intended to improve the speed at which those doing brain research are able to complete their data analysis, and it will advance the understanding of human cognition. This project has the potential to impact the way researchers in the field collect and analyze data by providing a rich set of cyber infrastructure tools for use in studying and modeling brain circuits. The intellectual merit of the project is very high as it has the potential to greatly reduce the time required to collect and analyze data. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This project will provide an enhanced set of software tools to researchers in several areas that are conducting research related to the human brain. Areas of research in which the cyber infrastructure software can be used include gene regulation, protein signaling, and even crime and terrorism analysis, all of which have the potential to benefit. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;While this work is mostly outside my domain of expertise, it looks like a well chosen problem and an interdisciplinary effort. In particular, the team appears to have the broad range of people they would need to pull this off, from experts from the target audience (who suggested the project, and is listed as a co-PI), to cognitive scientists and computer scientists. In addition to the Intellectual and Broader Impact criteria, the specific &amp;quot;additional criteria&amp;quot; listed in the Program Solicitation have been explicitly and (to the extent that I can tell) well addressed. The required supplemental documents address the required points as well. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity will allow brain scientists to visualize brain functions more easily and in much more detail than in the past. Network models, coupled with sophisticated methods for dimensionality reduction, promise to offer unique insights into the workings of the human brain. Moreover, the proposed visualization tool will go through rigorous evaluation that will allow its constant improvement. The proposers form a very strong group of well-established researchers in brain science and data visualization, offering a unique collaboration. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Researchers from other disciplines, e.g., those who study gene regulation or protein signaling, or who perform crime and terrorism analysis, have the potential to benefit from the proposed software. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The goal of this proposal is to develop, test, and deploy interactive visualization tools for scientific study of brain circuits. The tools will help brain researchers view brain circuits at multiple scales and perform sophisticated analysis of research hypotheses. The team members have a decade of experience developing scientific visualization tools for scientific users and consist of experts in cognitive science, neuroscience, computer science, and visual design. They are well qualified to conduct the project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #2===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This project is focused on an area of research that spans several disciplines that will be able to utilize the cyber infrastructure software that will be developed. The PIs have a proven record of accomplishment in prior research projects. The project has intellectual merit and will have a broad impact by providing an enhanced set of software tools that will facilitate the efforts of researchers doing work related to studying and modeling the brain.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #3===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Very Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This unique collaboration between brain scientists and data visualization scientists promises to offer tremendous benefits to the scientific community. The software that will be developed will allow researchers to understand the signal pathways in the human brain in more detail than ever before. The software will be analyzed through a rigorous process, using models of cognition.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Review #4===&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is an interdisciplinary project and the target user community is brain scientists. The tools will be made available to the public and are expected to benefit the entire brain science research community as well as other disciplines studying linked types of data. The tools can also be used in classes to help students understand connectivity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;I can see the proposed work will be valuable for brain scientists to study connectivity and dynamics of neural circuits in intact brain as existing systems all have limitations and cannot satisfy the needs of brain scientists as discussed in the proposal. Other scientific domains may need similar tools. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal focuses on an interesting problem for which it also provides a novel solution. Therefore, I think this is a quality proposal and worthy of support.&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Review_Response&amp;diff=5475</id>
		<title>CS295J/Review Response</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Review_Response&amp;diff=5475"/>
		<updated>2011-10-04T14:10:33Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Full Proposal (SI2-SSI) Review and Response ==&lt;br /&gt;
&lt;br /&gt;
Dear reviewers,&lt;br /&gt;
&lt;br /&gt;
Thank you so much for your insightful, constructive, and helpful comments.  We particularly appreciated the positive evaluation of the proposal and of the potential of our methods for enhancing visual data analysis with regard to brain function.  &lt;br /&gt;
&lt;br /&gt;
We are overwhelmingly grateful to have received a &#039;competitive&#039; summary evaluation.&lt;br /&gt;
&lt;br /&gt;
We have noted and addressed the criticisms below in the context of the reviews received.  Our responses and details concerning our revisions are in bold.&lt;br /&gt;
&lt;br /&gt;
Regards,&lt;br /&gt;
&lt;br /&gt;
David Laidlaw et al.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;1.&#039;&#039;&#039; &#039;&#039;The panel&#039;s discussion focused on the sustainability issue beyond the end of the project support timeframe. While a section of the proposal talks about the Outreach, Education, and Sustainability Plan, the community outreach and sustainability aspects are treated somewhat cursorily. These two issues are closely related: without community support, the software is unlikely to be sustainable in the long run. On the other hand, the proposers appear to be well known in their field, which may enhance community uptake. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Section G now includes a more detailed plan for long-term sustainability. Specifically, we will focus on creating a lasting open-source community that provides frequent updates to the code, centralized by the core research team following a Macro R&amp;amp;D development infrastructure.&#039;&#039;&#039; ([[User: Stephen Brawner | Stephen Brawner]])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;2.&#039;&#039;&#039; &#039;&#039;The proposal was perhaps over-ambitious; aspects of the evaluation (for example, eye tracking) will create large amounts of data that will require correspondingly intensive data analysis. However, the panel felt that even if the project did not accomplish every detail of the proposal, it would still be highly worthwhile. Similarly, while some aspects of the project might be seen as risky, a certain amount of risk is acceptable in NSF proposals, or even expected. Moreover, any risk is mitigated by the qualifications of the PIs, as exemplified by their excellent track record. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;The proposers have laid out a detailed five year plan. One panelist questioned whether sufficient attention had been paid to issues of sustainability, in particular there was no mention made of plans for software support beyond the end of the five year plan. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1&#039;&#039;&#039; ([[User: Stephen Brawner|Stephen]])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;4.&#039;&#039;&#039; &#039;&#039;The proposers see this as sort of a prototype of the way software could be developed to support other scientific endeavors; this effect appears to constitute most of the Broader Impact (I wouldn&#039;t count benefit to &amp;quot;the entire brain science research community&amp;quot;, mentioned in the BI statement, as broader impact). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;5.&#039;&#039;&#039; &#039;&#039;One quibble I have in the Management and Coordination Plan is the statement that &amp;quot;The Stanford researchers will visit Brown if face-to-face interactions become necessary.&amp;quot; While electronic communication allows collaboration in ways that would not have been possible before, I think it&#039;s not a question of whether face-to-face will be necessary, but how often. Fortunately, this appears to have been built into the travel budget.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;As noted by the reviewer, face-to-face communication will be necessary: not an &#039;if&#039; but a &#039;when&#039;. Researchers at Brown will plan to coordinate visits to California, and vice versa.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;6.&#039;&#039;&#039; &#039;&#039;As the proposal is for SI2-SSI, I would like to know more on what the team plans to do to ensure the sustainability of the software and develop open-source community support. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;7.&#039;&#039;&#039; &#039;&#039;It is also unclear what the team will do to integrate diversity into the proposed activity. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-proposal (Expeditions) Reviews and Response ==&lt;br /&gt;
:&#039;&#039;Proposal Number: 1064261&lt;br /&gt;
:&#039;&#039;Panel Summary for Expeditions Preliminary Proposals &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Preliminary Proposal Summary (Vision/Goals of the Expedition) &lt;br /&gt;
:&#039;&#039;This proposal is focused on blending the areas of cognitive science, neuroscience, and HCI to develop new tools that would help in understanding the interrelationships in complex interconnected data sets. The work would focus on brain science activities but would likely be applicable to many other areas that have complex interconnected data such as crime and terrorism analysis. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The strength of this proposal is in the benefits it would bring to the intersections of cognitive science, neuroscience, and HCI. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Intellectual Merit &lt;br /&gt;
:&#039;&#039;The authors of this proposal are planning to build cognitive models of brain scientists&#039; perception and reasoning in performing their research. They then intend to use these models to develop new and improved interaction and visualization techniques for tracing neural pathways, with the expectation that use of the cognitive models will reduce the trial and error required to produce effective tools. Additionally, the cognitive models may even result in the invention of new visualizations through a more systematic exploration of the design space. One novel aspect of this proposal is the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception.&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;1. &#039;&#039;&#039;However, the proposal does not elaborate on how the PIs would use heuristic knowledge of artists and visual designers; this is only mentioned in the introduction but never developed in the proposal. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We have added a section called &amp;quot;Evaluating Predictive Models&amp;quot; that outlines performance metrics the system will predict, e.g., task completion time, number of insights (see Saraiya et al., 2005), and methods for predicting user state and goals from observed user interactions and eye-tracking.  We plan to collect and compare heuristic design knowledge with predictive models of visual-search behavior using eye-tracking; one expected contribution will be validating or revising existing design guidelines.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Overall the panel had difficulty coming to a common understanding of the proposal contents. The range of reviews mirrored the range of what people had read into the proposal and their enthusiasm for the proposal topic. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;That said, the proposal failed to convince the reviewers along a number of dimensions. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;2. &#039;&#039;&#039;First, the proposal fails to articulate a clear research plan and a clear set of outcomes. As an example, the proposal mentions a number of cognitive concepts that will be incorporated in the interface, such as causal reasoning and dual systems theory. Utilizing cognitive principles to inform this research is applauded, but we expected to see some indication of how they would be put into practice. The proposal is even less clear when it comes to goal maintenance.&lt;br /&gt;
&#039;&#039;&#039;We have added a section called &amp;quot;Task Analysis&amp;quot; that outlines how we will analyze and encode tasks and sub-tasks, including those that prime or conflict with one another.  In a subsection titled &amp;quot;Applications&amp;quot;, we outline the design of a software module that facilitates analyst &#039;goal maintenance&#039; by collecting user input, predicting goals from that input, and displaying suggestions for similar or priming sub-goals.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ] &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; &amp;quot;We have included a plan for implementing cognitive principles. Strategies will include monitoring users&#039; preferences and cognitive strategies through eye-tracking, reaction-time data, verbal reports, and the other means outlined above. Historically, such strategies have proved successful for gathering user information, input, and comprehension; we expect the same in our implementation. We also intend to use eye-tracking and reaction-time procedures to best gauge cognitive load to the extent that it affects user experience, as we strive to understand user adaptation and navigation strategies within our applications.&amp;quot;&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara Kliman-Silver]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;3. &#039;&#039;&#039;A more ambitious undertaking is that of predicting user performance. This aspect of the work is motivated by previous studies showing that student performance on algebra problems can be predicted based on eye movements. This is a very interesting result, but it is not clear how it would apply to an entirely different domain that requires a different and more taxing set of cognitive skills. The proposal does not describe how user behavior will be measured (other than through eye trackers) or even how performance is going to be measured. Algebra problems are generally closed ended, with a well-defined solution, whereas exploratory data analysis of neuroinformatics data is an open-ended problem.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;4. &#039;&#039;&#039;The proposal needs to be much more explicit about the techniques used to derive predictions including the track records of these techniques and the ways in which these techniques may need to be enhanced to be used in particular application domains. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 1.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;5. &#039;&#039;&#039;Another concern is methodological. We suggest that the team identify benchmark tasks that would be representative of the cognitive skills that the interface attempts to capture, and would also vary in their degree of complexity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;Original: We have identified a set of benchmark tasks to evaluate the users’ cognitive skills when working with our interface. These tasks test how well the interface increases users’ performance in the following cognitive aspects: multi-tasking, task-switching, visual search, reasoning, decision-making, and cognitive workload. A more detailed description of the design of these tasks, including the experiment procedure outline, the equipment and techniques involved, and the data analysis methods, can be found in section e.5 (Formal Testing).&#039;&#039;&#039; ([[User: Hua Guo|Hua]], Sep.22, 2011)&lt;br /&gt;
&#039;&#039;&#039;Revised: We will break down the brain circuitry exploration process into a set of simpler cognitive tasks through cognitive task analysis. Benchmark tests that involve solving simple problems related to brain circuitry analysis will then be designed based on the identified cognitive tasks. These benchmark tests can then be used to evaluate the users&#039; cognitive performance when working with our system in comparison with existing systems. ([[User: Hua Guo|Hua]], Sep.27, 2011)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; As the brain exploration process may involve essential complex tasks that cannot be broken down into simpler ones, we also choose to evaluate the tool qualitatively over a long-term period: we will observe and interview users about their experience with the tools in order to discover critical features that existing tools are missing. The researchers will also provide technical support as necessary. ([[User:Diem Tran|Diem Tran]] 21:22, 28 September 2011 (EDT))&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;Broader Impacts &lt;br /&gt;
:&#039;&#039;The proposed work would provide research opportunities for faculty, postdocs and students working in the project. A tool with the capabilities described in the proposal would mostly benefit the brain science research community. The tool will be used in two computer science courses at Brown. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;6. &#039;&#039;&#039;The panel thought that the proposal team could do more work in considering possibilities for broader societal impact and for designing more impactful outreach and education activities that would extend beyond the Brown community. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We have highlighted some of the broader implications for our project, such as making tools publicly available for other communities of brain scientists, medical researchers, and even instructors to learn about brain connectivity through our visualization initiatives. We have also outlined plans to develop tools for elementary, middle, and high school students as they explore the brain, its anatomy, and its capabilities.&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara Kliman-Silver]] ]&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Rationale for the Recommendation &lt;br /&gt;
:&#039;&#039;Despite enthusiasm for this topic and the potential for significant impact if successful, the panel could not support the pre-proposal at this time due to a lack of coherent vision and a realizable plan of implementation. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a very ambitious proposal that seeks to develop a new generation of visualization tools for the analysis of neuroinformatics data. While this is the kind of big-picture, high-risk project that the Expeditions in Computing program is designed to support, the proposal itself fails to provide a plan for achieving its high-level, abstract objectives. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Put an X next to the appropriate category &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Invite &lt;br /&gt;
:&#039;&#039;Invite-if-possible &lt;br /&gt;
:&#039;&#039;Do-Not-Invite	 X &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This summary was read by/to the panel, and the panel concurred that the summary accurately reflects the panel discussion.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Recommendation: Do Not Invite&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Review #1&lt;br /&gt;
:&#039;&#039;Rating: Very Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal is focused on blending the areas of cognitive science, neuroscience, and HCI to develop new tools that would help in understanding the interrelationships in complex interconnected data sets. One novel aspect of this proposal is the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception. The work would focus on brain science activities but would likely be applicable to many other areas that have complex interconnected data, such as crime and terrorism analysis. The strength of this proposal is in the benefits it would bring to the intersections of cognitive science, neuroscience, and HCI. &lt;br /&gt;
:&#039;&#039;The leadership team of distinguished faculty and researchers is well qualified to conduct this research. The entire team is an appropriate blend of researchers representing all of the subordinate areas of research. &lt;br /&gt;
:&#039;&#039;The quality of prior work from all participating members is uniformly outstanding and appropriate for this endeavor. &lt;br /&gt;
:&#039;&#039;While there has been much work of a similar nature done in the past, this project would move our knowledge forward on a number of new fronts. In addition, the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception is novel. &lt;br /&gt;
:&#039;&#039;The proposal is well conceived and organized and clearly presented. &lt;br /&gt;
:&#039;&#039;With the pupil tracking device requested in the budget, there appears to be sufficient access to all the required resources necessary for this undertaking. &lt;br /&gt;
:&#039;&#039;There seems to be a wealth of available experimental facilities in the associated institutions. Some of the relevant equipment includes tiled display walls, stereo-enabled desktop displays, ultra-high-resolution Wheatstone stereoscope, haptic devices, and a virtual-reality cave to come online later this year.	&lt;br /&gt;
:&#039;&#039;The leadership plan appears to be appropriate for the small size of the project personnel. The team members have a good history of interaction on related academic activities. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;7. &#039;&#039;&#039;Other than the nominal support for traditional computing needs by the computer science department at Brown, there did not appear to be any specific institutional support for this proposal. &lt;br /&gt;
:&#039;&#039;The budget is well thought out, clearly described, and justified. &lt;br /&gt;
:&#039;&#039;The collaboration amongst the faculty of Brown, Stanford, and the Rhode Island School of Design appears to be very appropriate for this project. The project would bring together cognitive scientists, visualization experts, and other domain specialists to bridge the gap between theory and practice in this area of brain research. Clearly, the synergy in this group would help to ensure the success of this work. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;8. &#039;&#039;&#039;The value of the proposed work appears to be very important in this specific area of research. While this is not my specific area of expertise, I have some concern whether this project meets the overarching goals of the Expeditions in Computing Initiative. In particular, it would seem to have the potential to stimulate interest in this area for graduate students in related disciplines, but may not be very successful in drawing attention to STEM studies amongst the K-12 age group. &lt;br /&gt;
&#039;&#039;&#039;Addressed by response to comment 6.&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara Kliman-Silver]] ]&lt;br /&gt;
:&#039;&#039;A major focus of this proposal is the conceptualization, design, and development of a software framework for predicting user performance. It would gather information on specific models, user interfaces, and user goals and endeavor to produce probabilistic estimates of the state of users over time as predicted by the models. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;9. &#039;&#039;&#039;I did not see anything in the proposal that indicated that it would be of particular interest to youth and underrepresented groups. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;10. &#039;&#039;&#039;The single sentence on stimulating effective knowledge transfer did not seem convincing. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a good general research proposal in the area of brain research and the understanding of interconnected relationships of complex data sets. There are some novel research concepts, including the heuristic knowledge of artists and visual designers related to cognition and perception. The leadership team is well qualified to lead this endeavor, and the outlook for good research results looks promising. The proposed budget is in line with the proposed activities and personnel commitments. This proposal should fare well as a general unsolicited proposal for NSF. I would rank this proposal to be better than many of the other proposals of the Expeditions in Computing Initiative.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Review #2&lt;br /&gt;
:&#039;&#039;Rating: Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Intellectual Merit: The main goal is to push the frontiers of data visualization, with secondary thrusts on improved learning in cognition about how we look at data. The strengths of the proposal are that the proposed tool is, as far as I know, ground-breaking in that it will actively change based on the user. Further, the researchers are well qualified, with expertise in psychology as well as in computer science. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;11. &#039;&#039;&#039;The major weakness was that I was not really clear how the software tool would eventually work. For instance, they talked a little about following pupils of the user - but I was not clear how they would capitalize off of that knowledge. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;As mentioned in the response to the introduction of the pre-proposal, we have expanded our plan for how we will incorporate pupil data from eye-tracking into creating an interface that is best optimized for data analysis. ([[User:Jenna Zeigen|Jenna Zeigen]])&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Value added: &lt;br /&gt;
:&#039;&#039;I think the proposed research could be complex and important enough to warrant this investment - however, I could not find a clearly defined path of attack in the proposal. It fits well within the 3 program goals, with probably the greatest emphasis on the first. Intelligent data visualization tools that can react to the user will open many new doors in understanding science. It will impact and inspire future computer scientists, although I do not see a preference for underrepresented groups. Finally, it has the potential to stimulate new significant findings in science and in education. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Leadership plan: &lt;br /&gt;
:&#039;&#039;The leadership plan seemed well thought out. They have a diverse set of researchers, each with their unique skill set. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Broader Impact: The work will be distributed to all who want to use it and will be used in classes at Brown, affecting students at all levels (either as developers or clients). I found their vision here a little short-sighted. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;In their proposal entitled &#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools,&#039; the authors propose to use understandings from cognitive science, neuroscience, and human-computer interaction to develop better tools for examining data. In particular, they will develop software that will visualize neural connections in the brain. At the same time, they will actively measure the client and use these data to predict what the client will want to see next. They present a compelling case that we need better visualization tools for understanding the brain.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Review #3&lt;br /&gt;
:&#039;&#039;Rating: Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The dominant approach to developing interactive systems is for the developers to interact with the envisioned users to gather the general requirements for the application and to construct software based on the developer&#039;s intuitions as to how the users will actually interact with the software. Depending on the sophistication of the organization, cycles of usability testing and re-design are used to refine the interface; alternately, they may simply release the software to the users and wait for the complaints or lack of sales. &lt;br /&gt;
:&#039;&#039;An alternative to this expensive process is to construct explicit models of the perceptual and decision-making processes of the users and then use these models to inform the design process. Work on cognitive models such as GOMS and ACT began about three decades ago and has progressed slowly but steadily throughout that period, and there have been a number of small demonstrations that such models can, in fact, eliminate most or all of the iteration previously required. &lt;br /&gt;
:&#039;&#039;The authors of this proposal are planning to build cognitive models of brain scientists&#039; perception and reasoning in performing their research. They then intend to use these models to develop new and improved interaction and visualization techniques for tracing of neural pathways, with the expectation that use of the cognitive models will reduce the trial and error required to produce effective tools. Additionally, the cognitive models may even result in the invention of new visualizations through a more systematic exploration of the design space. &lt;br /&gt;
:&#039;&#039;Although the focus of the recent work in cognitive models has been to develop engineering models which are capable of being used outside of the research setting, use of such models in the design of interactive systems has been slow to catch on. If nothing else, construction of the models requires a large amount of intellectual labor and, to date, impressive examples of the use of these models to justify that labor investment have been rare. This work has the potential for providing such a critical example and could be the impetus to finally move cognitive models into widespread use. &lt;br /&gt;
:&#039;&#039;Additionally, work in as complex an area as brain science will ensure that the cognitive modeling tools can handle nearly any application. &lt;br /&gt;
:&#039;&#039;Finally, if the models do result in improved tools, the research may result in new findings in the brain science field. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;12. &#039;&#039;&#039;The primary risk in this proposal is that the cognitive models which can be created are too weak to support the design process. There is no guarantee that the computer scientists and psychologists doing this research will be able to understand and model the cognitive processes of a brain scientist. &lt;br /&gt;
&#039;&#039;&#039;We have included our experience developing visualization tools for brain study at Brown, and we have added a detailed timeline; in each phase, brain scientists will be involved in model evaluation, testing, and providing feedback.&#039;&#039;&#039; ( --- [[User:Chen Xu|Chen Xu]])&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Value-added of funding the activity as an Expedition &lt;br /&gt;
:&#039;&#039;This work requires substantial commitment on the part of the computer scientists and psychologists to learn the brain science domain and on the part of the brain scientists for their interaction with the cognitive scientists. Such a commitment is unlikely to be obtained with smaller, more fragmented funding. Industry or venture capital are unlikely to fund this kind of research. &lt;br /&gt;
:&#039;&#039;The main knowledge transfer methods will be the mentoring of graduate students and the addition of formal courses intended to teach about interdisciplinary collaboration. In addition, the software they develop will be made available for distribution. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;13. &#039;&#039;&#039;Except through the rather limited vehicle of scholarly publication, it is not clear how the cognitive models themselves are to be made available. The authors may want to consider using their own visualization capabilities to explain the models. &lt;br /&gt;
&#039;&#039;&#039;On page x, section y, we have included preliminary visualizations, which are examples of the visualizations we will implement to explain the cognitive models we intend to incorporate in our research.&#039;&#039;&#039; [ [[User:Clara Kliman-Silver|Clara]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Leadership and Collaboration Plan &lt;br /&gt;
:&#039;&#039;Only two institutions are involved in this work and the senior researchers are all located at one of the two institutions. This should minimize coordination problems. &lt;br /&gt;
:&#039;&#039;Funding for the primary brain scientist in this research is at the 50% level and his supervisor has a nominal level of funding. This is a cause for concern, given the level of commitment required to support what is, essentially, someone else&#039;s area of research. A higher level of funding would be desirable, even if a substantial amount of the funded time is spent on pure brain science research. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal includes training a number of graduate and postdoctoral students; in fact, most of the funding requested is for student support. &lt;br /&gt;
:&#039;&#039;As mentioned earlier, to the extent this work results in wider acceptance and usage of cognitive models, particularly in the development of scientific software, it will accelerate the construction of interactive systems which can be used efficiently. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This work should lead to significantly wider use of cognitive modeling in interactive systems design, as well as provide researchers in brain science with superior tools.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Review #4&lt;br /&gt;
:&#039;&#039;Rating: Fair&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Summary &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal seeks to develop new visualization techniques that will assist brain scientists with the interpretation of high-dimensional data. For this purpose, the PI will incorporate design principles and knowledge from cognitive science, neuroscience, and human-computer interaction. The visualization system will also capture data from scientists as they use the tool and compare it with computational models from cognition, perception, and art. The tool will also be able to predict user performance and user state over time. The tool will be released through an open-source license and will be incorporated into two courses. The team has worked together for a number of years. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Criterion 1. What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Strengths &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The kind of tool envisioned in this proposal would be invaluable, not only in neuroinformatics but also in other disciplines that deal with high-dimensional, multi-scale data, from social networks to geospatial information. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Weaknesses &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Despite its laudable objective, this work is not ready for further scrutiny as a full proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;14. &#039;&#039;&#039;First, the proposal fails to articulate a clear research plan and a clear set of outcomes. As an example, the proposal mentions a number of cognitive concepts that will be incorporated in the interface, such as causal reasoning and dual systems theory. How are these principles going to be used to design a better visualization, and how are they going to be tested? I very much like the idea of using cognitive principles, but would have expected to see some indication of how they would be put into practice. The proposal is even less clear when it comes to goal maintenance: &amp;quot;We will use these principles to determine which tasks to make easily accessible to users and which to put in the background.&amp;quot; This is a general problem for interface design, not a solution to the problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;Reviewer 4 indicated that the proposal was not explicit about how cognitive principles would be implemented in the project. In section ___ on page __, we have elucidated how the listed principles will concretely lead to better, more optimized visualizations, and have described the studies we plan to perform to show that the visualizations are indeed effective. ([[User:Jenna Zeigen|Jenna Zeigen]])&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;15. &#039;&#039;&#039;A more ambitious undertaking is that of predicting user performance. This aspect of the work is motivated by previous studies showing that student performance on algebra problems can be predicted based on eye movements. This is a very interesting result, but it is not clear how it would apply to an entirely different domain that requires a different and more taxing set of cognitive skills. This is a wild extrapolation. The proposal does not describe how user behavior will be measured (other than through eye trackers) or even how performance is going to be measured. Algebra problems are generally closed ended, with a well-defined solution, whereas exploratory data analysis of neuroinformatics data is an open-ended problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We have made a detailed timeline, and in phase/year 3 we describe how user performance will be measured and how the cognitive models will be evaluated. We will evaluate the models&#039; predictions against actual user data; in addition to eye-tracking, we will adopt computer-interaction logging, video logging, and skin conductance response, and the results will help refine the models.&#039;&#039;&#039; ( --- [[User:Chen Xu|Chen Xu]])&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;16. &#039;&#039;&#039;Another concern is methodological. Say that the proposal had articulated a clear plan and a reasonable set of deliverables for a new generation of visualization interfaces. Wouldn&#039;t it be better to test this interface on some benchmark problems, and see how it facilitates performance relative to a standard interface? These benchmark problems would be representative of the cognitive skills that the interface attempts to capture, and would also vary in their degree of complexity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We mainly test user interface performance empirically, but the tests are based on the principles of perception and goal selection. We have also included some standards, such as the time an average user takes to complete a unit task. We will include a comparison between the past and present interfaces in a unit-task performance test.&#039;&#039;&#039; ( --- [[User:Wenjun Wang|Wenjun Wang]])&lt;br /&gt;
:&#039;&#039;To what extent does the proposed activity suggest and explore creative, original, or potentially transformative concepts? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal is very ambitious in its overall objectives. A visualization tool having the characteristics suggested in the proposal would be invaluable to brain science as well as to other scientific disciplines dealing with high-dimensional complex data, such as genomics/proteomics, geospatial analysis, network analysis, etc. However, the proposal fails to turn a high-level concept into a realizable implementation. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;Our framework allows more modules to be added as the system evolves in the future. Both the interface for viewing data and the model for making predictions can be applied to other disciplines.&#039;&#039;&#039; ( --- [[User:Wenjun Wang|Wenjun Wang]])&lt;br /&gt;
:&#039;&#039;Is the work of sufficient import, scale, and/or complexity to justify this type of investment? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The brain is one of the scientific frontiers for the 21st century. The proposal has the complexity and scale worthy of this type of investment, but the proposal fails to deliver a realistic plan (if any plan at all) or even specifications. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Will the work contribute to realization of the EIC program goals and is it likely to demonstrate completion of these goals? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Understanding the brain is one of our greatest scientific challenges. Unfortunately, without a clear research plan it is difficult to assess the likelihood that the proposal will be able to demonstrate completion of its overall goals. &lt;br /&gt;
&#039;&#039;&#039; We have added a new section named &amp;quot;Detailed Plan&amp;quot; which describes in detail our proposed plan throughout the project duration. We divide the plan into milestones, allocate manpower and budget, and provide a tentative schedule for each milestone.&#039;&#039;&#039; [ [[User:Diem Tran|Diem Tran]] ] &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Value of the experimental systems or shared experimental facilities proposed &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The investigators will utilize some shared facilities in their research and will share software and data that they produce to allow further research by others. The proposed software testbed will be used across the collaborators to test models of cognition and perception in the context of HCI. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Leadership and Collaboration Plan &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The investigators have worked together for a number of years, have taught classes together, and their students have attended classes from each other. No leadership or collaboration plan is discussed beyond this. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Criterion 2. What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Strengths &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed work would provide research opportunities for faculty, postdocs and students working in the project. A tool with the capabilities described in the proposal would mostly benefit the brain science research community. The tool will be used in two computer science courses at Brown. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Weaknesses &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;17. &#039;&#039;&#039;Societal benefits of this tool would derive from its scientific merit to the extent that it would help understand the brain. Given the characteristics of this project, I wonder if other NSF funding opportunities would be more suitable, such as the FODAVA program or the interdisciplinary program in neuroscience at CISE. I also wonder whether this work should be funded instead by NIH (NIBIB, NIMH). The budget contains a request for $3,000 to cover costs of animal (mouse) care; why is this needed given that the proposal is for software development? &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;While the scientific merit of this project is most directly applicable to understanding the brain, it should be noted that the goal to “convincingly demonstrate that the employed techniques facilitate better analysis” (described in section c.1) is unique to this proposal, and would have impacts reaching into any other discipline in which vast amounts of data need to be interpreted. [ [[User:Michael Spector|Michael Spector]] ]&lt;br /&gt;
&#039;&#039;&#039;In addition, it is our hope that much of the development we do in creating an interface and visualizations that adapt to user needs can be readily incorporated into a much wider range of programs, not only across disciplines but also across educational levels, allowing students to find a way to learn and explore that best fits their learning style and needs. [ [[User:Jenna Zeigen|Jenna Zeigen]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a very ambitious proposal that seeks to develop a new generation of visualization tools for the analysis of neuroinformatics data. The tools would allow brain scientists to explore high-dimensional data, and the tool would also predict user performance and state. The proposal is inspired by principles from cognitive science, neuroscience and HCI. While this is the kind of big-picture, high-risk project that the Expeditions in Computing program is designed to support, the proposal itself fails to provide a plan for achieving its high-level, abstract objectives. The proposal does not provide a leadership or collaboration plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Review #5&lt;br /&gt;
:&#039;&#039;Rating: Fair&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PROPOSAL OBJECTIVES AND APPROACH &lt;br /&gt;
:&#039;&#039;The proposal develops a variety of tools for interactive analysis and reasoning for brain scientists. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;INTELLECTUAL MERIT &lt;br /&gt;
:&#039;&#039;The project addresses research in three areas: human-computer interaction, cognitive modeling and the connectivity in the brain. It lists 11 items that are to be developed. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;18. &#039;&#039;&#039;The proposal is vague and contains a lot of repetition. It is not well written. It is not clear what research experiments are performed. &lt;br /&gt;
&#039;&#039;&#039;As described in the proposal, we apply cognition and visualization principles to develop a visualization tool for brain circuits and connectivity. We evaluate our tool using methodologies established in cognitive science, psychology, and human-computer interaction research. Detailed explanations of our approach appear in our responses to comments 1, 2, 5, 14, and 16. [ [[User:Diem Tran | Diem Tran]] ]&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The team is fine. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;There is a great need for the tools by the computational neuroscience and cognitive science community. This project will develop some of these tools. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;19. &#039;&#039;&#039;The details of the educational plan are not given. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;As noted on page 26 (section g, &amp;quot;Outreach, Education, and Sustainability Plan&amp;quot;), we plan to educate users on the use of our developed tool through the same channels as our outreach efforts; namely, to the visualization, interaction, and brain science communities. We also note that we will deploy our tool in academic settings, such as labs and classrooms in the brain sciences, which will benefit those groups from an educational point of view, as well as provide another avenue through which we can receive feedback on our software. [ [[User:Michael Spector|Michael Spector]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal can be strengthened by focusing and making the challenges and ideas more clear.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Review #6&lt;br /&gt;
:&#039;&#039;Rating: Poor&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity of developing and improving neuroscience visual analysis tools is very important. Currently access to the genre of systems described in this proposal, especially in areas with complex sub-structure such as neuroscience, is lacking and the proposed activity could have a profound effect on the state-of-the-art in the field. The possible interplay between the user-interface experts and biomedical informatics developers is a possible strength. The available team and resources are very strong and capable with an excellent track record in the field. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;20. &#039;&#039;&#039;However, the proposed activities within the proposal are completely underspecified. Given the scale of the project and the challenging nature of the problems to be faced, the level of technical detail and planning presented in the proposal is insufficient and unconvincing. A great many claims are made in the proposal with no clear measurable end-points to determine how success within the project could be evaluated. Specifically, the authors claim to employ developments in concepts of cognition and perception to assist scientists&#039; reasoning. What aspects of reasoning? For which tasks? Within which discipline? If this is only related to tractography, what sort of scientific hypotheses do the applicants expect to address? Are they relating these representations of neural connectivity to studies in animals? Are they relating these analyses to other modalities of imaging data? How do they intend to reason over the complex semantics of these other experimental types? These are all glaring omissions from the proposal. The description of the cognitive science aspects of the project was marginally better specified, but I still found the details lacking. For example, the claim was made that &#039;our system will tune itself to individual work styles.&#039; How? What technical elements will the system exploit to accomplish this? In particular, the applicants must spend more time specifying the precise tasks that the system is designed to tackle before it is possible to improve or optimize performance at that task. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Response: We have added a section called &amp;quot;Task Analysis,&amp;quot; in which we outline a plan for specifying visual analysis tasks through user observation. We will begin with loosely structured observations to generate hypotheses about primary tasks and user strategies in this domain, and continue with cognitive task analysis methods to focus and test these hypotheses. We present preliminary results that show the viability of this plan.  ([[User:Caroline Ziemkiewicz|Caroline Ziemkiewicz]] 10:48, 22 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;21. &#039;&#039;&#039;In particular, the statement &amp;quot;We expect the core element to evolve significantly through the five years of the project. It cannot be meaningfully defined without the data we will acquire from users, so details beyond this overview are not possible yet&amp;quot; is incredibly revealing and suggests that the applicants themselves do not have a clear idea of how they intend to solve these problems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We have included a diagram to clarify the software architecture described in the proposal.  We implemented a simple version of the architecture as a proof of concept, as described in the section &amp;quot;Preliminary Work&amp;quot;.  Individual modules will be developed and adjusted with an iterative testing schedule.&#039;&#039;&#039;  [It would also be good to add, &amp;quot;This is similar to another problem/tool that worked successfully in this cited paper&amp;quot;, but I haven&#039;t found a good example yet.] [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;22. &#039;&#039;&#039;The Figures presented in the proposal looked confusing and uninformative, adding nothing to the argument that these systems would actually help a scientist understand the underlying data. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;We have incorporated a new set of figures into the proposal, which are more focused on the concept and design of the proposed system; in particular, we have included some figures to demonstrate the prototype design for different views of the system. [ [[User:Hua Guo|Hua]] ]&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;I very much liked the proposed idea of using the system directly in courses taught at Brown and other collaborating institutions. I think that this is a sizable innovation that would be very welcome in the field and might even form the basis of evaluation metrics for the success of the system (which could address one of my previous criticisms of the project). &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The activity is well integrated with training and learning. There are a number of students in the group, all of whom would have the opportunity to work with professionals from a very different focus. The interdisciplinary nature of the work coupled with the need for analysis tools in biology would be an excellent synergy to cultivate. &lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;23. &#039;&#039;&#039;The presence of high-end graphics equipment (such as a virtual-reality cave and haptic displays, etc.) is a plus for the project but is also a hindrance to the developers&#039; ability to release their work to a broader audience. If the system is only available to the small number of people who have access to such facilities, then the impact of the work would be lessened.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We have expanded our dissemination plan to include multiple distributions for different end-user workstations.  Modules that require high-end hardware for online data capture (e.g., eye-tracking) or display will be disabled in the basic distributions for Windows and Mac.  Our research agenda includes experiments with this equipment -- for instance, to validate cognitive models or design guidelines -- but it is not necessary for most end users who download releases of the tool.&#039;&#039;&#039; [ [[User:Steven Gomez|Steven Gomez]] ]&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;24. &#039;&#039;&#039;The proposed activity makes no specific claims to target or support underrepresented groups explicitly. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;25. &#039;&#039;&#039;The underspecification of the technical aspects of the project undermines the open-source distribution of the code. It is technically demanding to generate usable open source products for other people to use. Notably, in browsing the co-PIs&#039; webpages, no easily accessible open-source software products were available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;- Since the review, the PI&#039;s lab has published a large selection of source code from previous projects, available from its website (vis.cs.brown.edu), with provisions for multiple platforms. &amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;- We are undertaking a project to make anonymized images of healthy and abnormal brains (and related data), from our projects and those of a number of collaborators worldwide, freely available to the scientific community. &amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;- Development will be done using a publicly visible source code repository (e.g., Github), so that other scientists and the public may be able to track progress, comment on changes, provide feedback, and even contribute pieces of code to the software. &amp;lt;br /&amp;gt;&lt;br /&gt;
[ [[User:Nathan Malkin|Nathan]] ]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Although the high-level conceptualization of this project is exciting, the way that the project was described in the proposal was massively underspecified. Technical details were lacking and some fundamental aspects of the project&#039;s conceptualization in terms of the scientific domain under study were missing. There was no timetable, and no evaluation proposed to see how progress would be measured. The authors should be careful about making high-level claims concerning the possible impact of the proposed work without a more carefully constructed argument to back up the claims.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Removed Chunks of the Proposal (not included in the response letter)&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Proposal Information&lt;br /&gt;
:&#039;&#039;Proposal Number: 1064261&lt;br /&gt;
:&#039;&#039;Proposal Title: Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;Received by NSF: 09/10/10&lt;br /&gt;
:&#039;&#039;Principal Investigator: David Laidlaw&lt;br /&gt;
:&#039;&#039;Co-PI(s): David Badre, Steven Sloman&lt;br /&gt;
:&#039;&#039;This Proposal has been Electronically Signed by the Authorized Organizational Representative (AOR).&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program Information&lt;br /&gt;
:&#039;&#039;NSF Division: Division of Computer and Communication Foundations&lt;br /&gt;
:&#039;&#039;NSF Program: Experimental Expeditions&lt;br /&gt;
:&#039;&#039;Program Officer: Mitra Basu&lt;br /&gt;
:&#039;&#039;PO Telephone: (703) 292-8649&lt;br /&gt;
:&#039;&#039;PO Email: mbasu@nsf.gov&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Unused Parts from Proposal ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;INTELLECTUAL MERIT (INCLUDING POTENTIAL TRANSFORMATIVE ASPECTS): &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;POSITIVE ASPECTS OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel noted that the proposal was clear and comprehensive, and addressed the criteria for SI2 proposals as well as the general NSF criteria. The proposed software would enhance both visualization of data on brain function and the knowledge discovery process of researchers in this area. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;BROADER IMPACTS: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;POSITIVE ASPECTS OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel believes that the framework of this project would be portable to such other fields as gene regulation and the analysis of other complex networks. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SHORTCOMINGS AND WEAKNESSES OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;==========================&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Panel Summary &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SHORTCOMINGS AND WEAKNESSES OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;ADDITIONAL REVIEW CRITERIA: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed software would primarily be of use to brain researchers. Other fields would be impacted indirectly, in the sense that if this way of building software packages combining visualization with support for hypothesis testing and tracking of analyses succeeds, it might provide a pattern for those fields to follow in their own software development processes. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;PANEL RECOMMENDATION (CHECK ONE): &lt;br /&gt;
:&#039;&#039;[X] Competitive (C) &lt;br /&gt;
:&#039;&#039;[ ] Not Competitive (NC) &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;This panel summary was read by panelists who participated in the discussion of this proposal, and they concurred that the summary accurately reflects the panel discussion.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Recommendation: Competitive&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;SYNTHESIS COMMENTS: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel agreed that this was a highly competitive proposal. In particular, the proposal attacks a large but tractable problem, and the team&#039;s wide and deep expertise gives the project a high probability of success. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Review #1&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SSI proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;A five year project to develop software for visualization and analysis of brain circuitry by working with brain researchers to analyze their cognitive processes that need to be supported. The software is to help link the visualization workflow to a &amp;quot;decisional&amp;quot; workflow, supporting &amp;quot;reasoning and analysis at a high level, rather than just displaying data.&amp;quot; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The development phase will include studies of user interaction at both low level (eye tracking, mouse click logs) and high level (decision making). &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The PI&#039;s of this project are proposing to develop, test, and deploy software tools for scientific study of brain circuits. The project will focus on building a cyber infrastructure software system that is intended to improve the speed at which those doing brain research are able to complete their data analysis, and it will advance the understanding of human cognition. This project has potential to have impact in the way researchers in the field collect and analyze data by providing a rich set of cyber infrastructure tools for use in studying and modeling brain circuits. The intellectual merit of the project is very high as it has the potential to greatly reduce the time required to collect and analyze data. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This project will provide an enhanced set of software tools to researchers in several areas that are conducting research related to the human brain. Areas of research in which the cyber infrastructure software can be used include gene regulation, protein signaling, and even crime and terrorism analysis, all of which have the potential to benefit. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;While this work is mostly outside my domain of expertise, it looks like a well chosen problem and an interdisciplinary effort. In particular, the team appears to have the broad range of people they would need to pull this off, from experts from the target audience (who suggested the project, and is listed as a co-PI), to cognitive scientists and computer scientists. In addition to the Intellectual and Broader Impact criteria, the specific &amp;quot;additional criteria&amp;quot; listed in the Program Solicitation have been explicitly and (to the extent that I can tell) well addressed. The required supplemental documents address the required points as well. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity will allow brain scientists to visualize brain functions more easily and in much more detail than in the past. Network models, coupled with sophisticated methods for dimensionality reduction, promise to offer unique insights into the workings of the human brain. Moreover, the proposed visualization tool will go through rigorous evaluation that will allow its constant improvement. The proposers form a very strong group of well-established researchers in brain science and data visualization, offering a unique collaboration. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Researchers from other disciplines, e.g., those who study gene regulation or protein signaling, or who perform crime and terrorism analysis, have the potential to benefit from the proposed software. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The goal of this proposal is to develop, test, and deploy interactive visualization tools for scientific study of brain circuits. The tools will help brain researchers view brain circuits at multiple scales and perform sophisticated analysis of research hypotheses. The team members have a decade of experience developing scientific visualization tools for scientific users and consist of experts in cognitive science, neuroscience, computer science, and visual design. They are well qualified to conduct the project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Review #2&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This project is focused on an area of research that spans several disciplines that will be able to utilize the cyber infrastructure software that will be developed. The PI&#039;s have a proven record of accomplishment in prior research projects. The project has intellectual merit and will have a broad impact by providing an enhanced set of software tools that will facilitate the efforts of researchers doing work related to studying and modeling the brain.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Review #3&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Very Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This unique collaboration between brain scientists and data visualization scientists promises to offer tremendous benefits to the scientific community. The software that will be developed, will allow researchers to understand the signal pathways in the human brain in more detail than ever before. The software will be analyzed through a rigorous process, using models of cognition.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Review #4&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:  Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is an interdisciplinary project and the target user community is brain scientists. The tools will be made available to the public and are expected to benefit the entire brain science research community as well as other disciplines studying linked types of data. The tools can also be used in classes to help students understand connectivity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;I can see that the proposed work will be valuable for brain scientists studying the connectivity and dynamics of neural circuits in the intact brain, as existing systems all have limitations and cannot satisfy the needs of brain scientists, as discussed in the proposal. Other scientific domains may need similar tools. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal focuses on an interesting problem for which it also provides a novel solution. Therefore, I think this is a quality proposal and worthy of support.&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Review_Response&amp;diff=5355</id>
		<title>CS295J/Review Response</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Review_Response&amp;diff=5355"/>
		<updated>2011-09-27T17:22:04Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Full Proposal (SI2-SSI) Review and Response ==&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Proposal Information&lt;br /&gt;
:&#039;&#039;Proposal Number: 1047832&lt;br /&gt;
:&#039;&#039;Proposal Title: SI2-SSI: Collaborative Research: Cognition-aware Visual Analytics of Brain Circuits&lt;br /&gt;
:&#039;&#039;Received by NSF: 06/14/10&lt;br /&gt;
:&#039;&#039;Principal Investigator: David Laidlaw&lt;br /&gt;
:&#039;&#039;Co-PI(s): David Badre, Steven Sloman&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program Information&lt;br /&gt;
:&#039;&#039;NSF Division: Office of CyberInfrastructure&lt;br /&gt;
:&#039;&#039;NSF Program: Software Institutes&lt;br /&gt;
:&#039;&#039;Program Officer: Manish Parashar&lt;br /&gt;
:&#039;&#039;PO Telephone: (703) 292-8970&lt;br /&gt;
:&#039;&#039;PO Email: mparasha@nsf.gov&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Summary #1&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number: 1047832&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Summary: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;INTELLECTUAL MERIT (INCLUDING POTENTIAL TRANSFORMATIVE ASPECTS): &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;POSITIVE ASPECTS OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel noted that the proposal was clear and comprehensive, and addressed the criteria for SI2 proposals as well as the general NSF criteria. The proposed software would enhance both visualization of data on brain function and the knowledge discovery process of researchers in this area. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SHORTCOMINGS AND WEAKNESSES OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel&#039;s discussion focused on the sustainability issue beyond the end of the project support timeframe. While a section of the proposal talks about the Outreach, Education, and Sustainability Plan, the community outreach and sustainability aspects are treated somewhat cursorily. These two issues are closely related: without community support, the software is unlikely to be sustainable in the long run. On the other hand, the proposers appear to be well known in their field, which may enhance community uptake. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal was perhaps over-ambitious; aspects of the evaluation (for example, eye tracking) will create large amounts of data that will require correspondingly intensive data analysis. However, the panel felt that even if the project did not accomplish every detail of the proposal, it would still be highly worthwhile. Similarly, while some aspects of the project might be seen as risky, a certain amount of risk is acceptable in NSF proposals, or even expected. Moreover, any risk is mitigated by the qualifications of the PIs, as exemplified by their excellent track record. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;BROADER IMPACTS: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;POSITIVE ASPECTS OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel believes that the framework of this project would be portable to such other fields as gene regulation and the analysis of other complex networks. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SHORTCOMINGS AND WEAKNESSES OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;ADDITIONAL REVIEW CRITERIA: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed software would primarily be of use to brain researchers. Other fields would be impacted indirectly, in the sense that if this way of building software packages combining visualization with support for hypothesis testing and tracking of analyses succeeds, it might provide a pattern for those fields to follow in their own software development processes. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposers have laid out a detailed five-year plan. One panelist questioned whether sufficient attention had been paid to issues of sustainability; in particular, no mention was made of plans for software support beyond the end of the five-year plan. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SYNTHESIS COMMENTS: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel agreed that this was a highly competitive proposal. In particular, the proposal attacks a large but tractable problem, and the team&#039;s wide and deep expertise gives the project a high probability of success. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PANEL RECOMMENDATION (CHECK ONE): &lt;br /&gt;
:&#039;&#039;[X] Competitive (C) &lt;br /&gt;
:&#039;&#039;[ ] Not Competitive (NC) &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This panel summary was read by panelists who participated in the discussion of this proposal, and they concurred that the summary accurately reflects the panel discussion.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Recommendation: Competitive&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #1&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1047832&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Software Institutes&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SI2-SSI: Collaborative Research: Cognition-aware Visual Analytics of Brain Circuits&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SSI proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;A five year project to develop software for visualization and analysis of brain circuitry by working with brain researchers to analyze their cognitive processes that need to be supported. The software is to help link the visualization workflow to a &amp;quot;decisional&amp;quot; workflow, supporting &amp;quot;reasoning and analysis at a high level, rather than just displaying data.&amp;quot; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The development phase will include studies of user interaction at both low level (eye tracking, mouse click logs) and high level (decision making). &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposers see this as sort of a prototype of the way software could be developed to support other scientific endeavors; this effect appears to constitute most of the Broader Impact (I wouldn&#039;t count benefit to &amp;quot;the entire brain science research community&amp;quot;, mentioned in the BI statement, as broader impact). &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;While this work is mostly outside my domain of expertise, it looks like a well-chosen problem and an interdisciplinary effort. In particular, the team appears to have the broad range of people they would need to pull this off, including an expert from the target audience (who suggested the project and is listed as a co-PI) as well as cognitive scientists and computer scientists. In addition to the Intellectual and Broader Impact criteria, the specific &amp;quot;additional criteria&amp;quot; listed in the Program Solicitation have been explicitly and (to the extent that I can tell) well addressed. The required supplemental documents address the required points as well. One quibble I have in the Management and Coordination Plan is the statement that &amp;quot;The Stanford researchers will visit Brown if face-to-face interactions become necessary.&amp;quot; While electronic communication allows collaboration in ways that would not have been possible before, I think it&#039;s not a question of whether face-to-face will be necessary, but how often. Fortunately, this appears to have been built into the travel budget.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #2&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1047832&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Software Institutes&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SI2-SSI: Collaborative Research: Cognition-aware Visual Analytics of Brain Circuits&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The PIs of this project are proposing to develop, test, and deploy software tools for the scientific study of brain circuits. The project will focus on building a cyber infrastructure software system intended to improve the speed at which brain researchers are able to complete their data analysis, and it will advance the understanding of human cognition. This project has the potential to change the way researchers in the field collect and analyze data by providing a rich set of cyber infrastructure tools for use in studying and modeling brain circuits. The intellectual merit of the project is very high, as it has the potential to greatly reduce the time required to collect and analyze data. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This project will provide an enhanced set of software tools to researchers in several areas conducting research related to the human brain. Areas in which the cyber infrastructure software could be used include gene regulation, protein signaling, and even crime and terrorism analysis, all of which have the potential to benefit. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This project is focused on an area of research that spans several disciplines that will be able to utilize the cyber infrastructure software that will be developed. The PIs have a proven record of accomplishment in prior research projects. The project has intellectual merit and will have a broad impact by providing an enhanced set of software tools that will facilitate the efforts of researchers doing work related to studying and modeling the brain.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #3&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1047832&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Software Institutes&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SI2-SSI: Collaborative Research: Cognition-aware Visual Analytics of Brain Circuits&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Very Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity will allow brain scientists to visualize brain functions more easily and in much more detail than in the past. Network models, coupled with sophisticated methods for dimensionality reduction, promise to offer unique insights into the workings of the human brain. Moreover, the proposed visualization tool will go through rigorous evaluation that will allow its constant improvement. The proposers form a very strong group of well-established researchers in brain science and data visualization, offering a unique collaboration. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Researchers from other disciplines, e.g., those who study gene regulation or protein signaling, or who perform crime and terrorism analysis, have the potential to benefit from the proposed software. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This unique collaboration between brain scientists and data visualization scientists promises to offer tremendous benefits to the scientific community. The software that will be developed will allow researchers to understand the signal pathways in the human brain in more detail than ever before. The software will be analyzed through a rigorous process, using models of cognition.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #4&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1047832&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Software Institutes&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SI2-SSI: Collaborative Research: Cognition-aware Visual Analytics of Brain Circuits&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The goal of this proposal is to develop, test, and deploy interactive visualization tools for scientific study of brain circuits. The tools will help brain researchers view brain circuits at multiple scales and perform sophisticated analysis of research hypotheses. The team members have a decade of experience developing scientific visualization tools for scientific users and consist of experts in cognitive science, neuroscience, computer science, and visual design. They are well qualified to conduct the project. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is an interdisciplinary project and the target user community is brain scientists. The tools will be made available to the public and are expected to benefit the entire brain science research community as well as other disciplines studying linked types of data. The tools can also be used in classes to help students understand connectivity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;As the proposal is for SI2-SSI, I would like to know more on what the team plans to do to ensure the sustainability of the software and develop open-source community support. It is also unclear what the team will do to integrate diversity into the proposed activity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;As added in section G, the plan for long-term sustainability now includes more detail. Specifically, we will focus on creating a lasting open-source community that provides frequent updates to the code, centralized by the core research team following a Macro R&amp;amp;D development infrastructure.&#039;&#039;&#039; ([[User: Stephen Brawner | Stephen Brawner]])&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;I can see that the proposed work will be valuable for brain scientists studying the connectivity and dynamics of neural circuits in the intact brain, as existing systems all have limitations and cannot satisfy the needs of brain scientists, as discussed in the proposal. Other scientific domains may need similar tools. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal focuses on an interesting problem for which it also provides a novel solution. Therefore, I think this is a quality proposal and worthy of support.&lt;br /&gt;
&lt;br /&gt;
== Pre-proposal (Expeditions) Reviews and Response ==&lt;br /&gt;
:&#039;&#039;Proposal Information&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Received by NSF:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;09/10/10&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;David Laidlaw&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Co-PI(s):&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;David Badre&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Steven Sloman&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This Proposal has been Electronically Signed by the Authorized Organizational Representative (AOR).&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program Information&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Division:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Division of Computer and Communication Foundations&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Program Officer:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Mitra Basu&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PO Telephone:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;(703) 292-8649&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PO Email:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;mbasu@nsf.gov&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Summary #1&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number: 1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Summary: &lt;br /&gt;
:&#039;&#039;Panel Summary &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Summary for Expeditions Preliminary Proposals &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Preliminary Proposal Summary (Vision/Goals of the Expedition) &lt;br /&gt;
:&#039;&#039;This proposal is focused on blending the areas of cognitive science, neuroscience, and HCI to develop new tools that would help in understanding the interrelationships in complex interconnected data sets. The work would focus on brain science activities but would likely be applicable to many other areas that have complex interconnected data such as crime and terrorism analysis. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The strength of this proposal is in the benefits it would bring to the intersections of cognitive science, neuroscience, and HCI. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Intellectual Merit &lt;br /&gt;
:&#039;&#039;The authors of this proposal are planning to build cognitive models of brain scientists&#039; perception and reasoning in performing their research. They then intend to use these models to develop new and improved interaction and visualization techniques for tracing neural pathways, with the expectation that use of the cognitive models will reduce the trial and error required to produce effective tools. Additionally, the cognitive models may even result in the invention of new visualizations through a more systematic exploration of the design space. One novel aspect of this proposal is the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception. However, the proposal does not elaborate on how the PIs would use heuristic knowledge of artists and visual designers; this is only mentioned in the introduction but never developed in the proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Overall the panel had difficulty coming to a common understanding of the proposal contents. The range of reviews mirrored the range of what people had read into the proposal and their enthusiasm for the proposal topic. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;That said, the proposal failed to convince the reviewers along a number of dimensions. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;First, the proposal fails to articulate a clear research plan and a clear set of outcomes. As an example, the proposal mentions a number of cognitive concepts that will be incorporated in the interface, such as causal reasoning and dual systems theory. Utilizing cognitive principles to inform this research is applauded, but we expected to see some indication of how they would be put into practice. The proposal is even less clear when it comes to goal maintenance.&lt;br /&gt;
:: &#039;&#039;&#039;We have outlined the design of a module aimed at facilitating &#039;goal maintenance&#039; with analysts.  It will integrate with the brain-diagram user interface to collect input, including logging and explicit communications (e.g., what the user declares himself/herself to be doing), then predict user goals and propose or prime facilitating sub-goals.  We have added a section called &amp;quot;Task Analysis&amp;quot; that outlines how we will analyze and encode tasks and sub-tasks that prime or conflict, and will use these encodings in the predictive model used by the module.&#039;&#039;&#039; ([[User:Steven Gomez|Steven Gomez]]) &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039; &amp;quot;We have included a plan for implementing cognitive principles. Strategies (through eye-tracking and other means outlined above) will include monitoring users&#039; preferences and cognitive strategies through reaction time data and verbal reports from users. Historically, such strategies have proved successful when gathering user information, input, and comprehension; we expect the same in our implementation. We also intend to use eye-tracking and reaction time procedures in order to best gauge cognitive load to the extent that it affects user experience, as we strive to understand user adaptation and navigation strategies within our applications.&amp;quot;&#039;&#039;&#039; --Clara&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;A more ambitious undertaking is that of predicting user performance. This aspect of the work is motivated by previous studies showing that student performance on algebra problems can be predicted based on eye movements. This is a very interesting result, but it is not clear how it would apply to an entirely different domain that requires a different and more taxing set of cognitive skills. The proposal does not describe how user behavior will be measured (other than through eye trackers) or even how performance is going to be measured. Algebra problems are generally closed ended, with a well-defined solution, whereas exploratory data analysis of neuroinformatics data is an open-ended problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal needs to be much more explicit about the techniques used to derive predictions including the track records of these techniques and the ways in which these techniques may need to be enhanced to be used in particular application domains. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Another concern is methodological. We suggest that the team identify benchmark tasks that would be representative of the cognitive skills that the interface attempts to capture, and would also vary in their degree of complexity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;Original: We have identified a set of benchmark tasks to evaluate the users’ cognitive skills when working with our interface. These tasks test how well the interface increases users’ performance in the following cognitive aspects: multi-tasking, task-switching, visual search, reasoning, decision-making, and cognitive workload. A more detailed description of the design of these tasks, including an outline of the experiment procedure, the equipment and techniques involved, and the data analysis methods, can be found in section e.5 (Formal Testing).&#039;&#039;&#039; ([[User: Hua Guo|Hua]], Sep.22, 2011)&lt;br /&gt;
:: &#039;&#039;&#039;Revised: We will break down the brain circuitry exploration process into a set of simpler cognitive tasks through cognitive task analysis. Benchmark tests that involve solving simple problems related to brain circuitry analysis will then be designed based on the identified cognitive tasks. These benchmark tests can then be used to evaluate the users&#039; cognitive performance when working with our system in comparison with existing systems.&#039;&#039;&#039; ([[User: Hua Guo|Hua]], Sep.27, 2011)&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Broader Impacts &lt;br /&gt;
:&#039;&#039;The proposed work would provide research opportunities for faculty, postdocs and students working in the project. A tool with the capabilities described in the proposal would mostly benefit the brain science research community. The tool will be used in two computer science courses at Brown. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel thought that the proposal team could do more work in considering possibilities for broader societal impact and for designing more impactful outreach and education activities that would extend beyond the Brown community. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Rationale for the Recommendation &lt;br /&gt;
:&#039;&#039;Despite enthusiasm for this topic and the potential for significant impact if successful, the panel could not support the pre-proposal at this time due to a lack of coherent vision and a realizable plan of implementation. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a very ambitious proposal that seeks to develop a new generation of visualization tools for the analysis of neuroinformatics data. While this is the kind of big-picture, high-risk project that the Expeditions in Computing program is designed to support, the proposal itself fails to provide a plan for achieving its high-level, abstract objectives. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Put an X next to the appropriate category &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Invite &lt;br /&gt;
:&#039;&#039;Invite-if-possible &lt;br /&gt;
:&#039;&#039;Do-Not-Invite	 X &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This summary was read by/to the panel, and the panel concurred that the summary accurately reflects the panel discussion.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Recommendation: Do Not Invite&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #1&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Very Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal is focused on blending the areas of cognitive science, neuroscience, and HCI to develop new tools that would help in understanding the interrelationships in complex interconnected data sets. One novel aspect of this proposal is the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception. The work would focus on brain science activities but would likely be applicable to many other areas that have complex interconnected data, such as crime and terrorism analysis. The strength of this proposal is in the benefits it would bring to the intersections of cognitive science, neuroscience, and HCI. &lt;br /&gt;
:&#039;&#039;The leadership team of distinguished faculty and researchers is well qualified to conduct this research. The entire team is an appropriate blend of researchers representing all of the subordinate areas of research. &lt;br /&gt;
:&#039;&#039;The quality of prior work from all participating members is uniformly outstanding and appropriate for this endeavor. &lt;br /&gt;
:&#039;&#039;While much work of a similar nature has been done in the past, this project would move our knowledge forward on a number of new fronts. In addition, the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception is novel. &lt;br /&gt;
:&#039;&#039;The proposal is well conceived and organized and clearly presented. &lt;br /&gt;
:&#039;&#039;With the pupil tracking device requested in the budget, there appears to be sufficient access to all the required resources necessary for this undertaking. &lt;br /&gt;
:&#039;&#039;There seems to be a wealth of available experimental facilities at the associated institutions. Some of the relevant equipment includes tiled display walls, stereo-enabled desktop displays, an ultra-high-resolution Wheatstone stereoscope, haptic devices, and a virtual-reality cave to come online later this year. &lt;br /&gt;
:&#039;&#039;The leadership plan appears to be appropriate for the small size of the project personnel. The team members have a good history of interaction on related academic activities. &lt;br /&gt;
:&#039;&#039;Other than the nominal support for traditional computing needs provided by the computer science department at Brown, there did not appear to be any specific institutional support for this proposal. &lt;br /&gt;
:&#039;&#039;The budget is well thought out, clearly described, and justified. &lt;br /&gt;
:&#039;&#039;The collaboration amongst the faculty of Brown, Stanford, and the Rhode Island School of Design appears to be very appropriate for this project. The project would bring together cognitive scientists, visualization experts, and other domain specialists to bridge the gap between theory and practice in this area of brain research. Clearly the synergy in this group would help to ensure the success of this work. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The value of the proposed work appears to be very important in this specific area of research. While this is not my specific area of expertise, I have some concern whether this project meets the overarching goals of the Expeditions in Computing Initiative. In particular, it would seem to have the potential to stimulate interest in this area for graduate students in related disciplines, but may not be very successful in drawing attention to STEM studies amongst the K-12 age group. &lt;br /&gt;
:&#039;&#039;A major focus of this proposal is the conceptualization, design, and development of a software framework for predicting user performance. It would gather information on specific models, user interfaces, and user goals and endeavor to produce probabilistic estimates of the state of users over time as predicted by the models. &lt;br /&gt;
:&#039;&#039;I did not see anything in the proposal that indicated that it would be of particular interest to youth and underrepresented groups. &lt;br /&gt;
:&#039;&#039;The single sentence on stimulating effective knowledge transfer did not seem convincing. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a good general research proposal in the area of brain research and the understanding of interconnected relationships of complex data sets. There are some novel research concepts, including the heuristic knowledge of artists and visual designers related to cognition and perception. The leadership team is well qualified to lead this endeavor, and the outlook for good research results looks promising. The proposed budget is in line with the proposed activities and personnel commitments. This proposal should fare well as a general unsolicited proposal for NSF. I would rank this proposal to be better than many of the other proposals of the Expeditions in Computing Initiative.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #2&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Intellectual Merit: The main goal is to push the frontiers of data visualization, with a secondary thrust on improved understanding of the cognition of how we look at data. The strengths of the proposal are that the proposed tool is, as far as I know, ground-breaking in that it will actively change based on the user. Further, the researchers are well qualified, with expertise in psychology as well as in computer science. The major weakness was that I was not really clear how the software tool would eventually work. For instance, they talked a little about following the pupils of the user, but I was not clear how they would capitalize on that knowledge. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Value added: &lt;br /&gt;
:&#039;&#039;I think the proposed research could be complex and important enough to warrant this investment; however, I could not find a clearly defined path of attack in the proposal. It fits well within the three program goals, with probably the greatest emphasis on the first. Intelligent data visualization tools that can react to the user will open many new doors in understanding science. It will impact and inspire future computer scientists, although I do not see a preference for underrepresented groups. Finally, it has the potential to stimulate new significant findings in science and in education. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Leadership plan: &lt;br /&gt;
:&#039;&#039;The leadership plan seemed well thought out. They have a diverse set of researchers, each with their unique skill set. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Broader Impact: The work will be distributed to all who want to use it and will be used in classes at Brown, affecting students at all levels (either as developers or clients). I found their vision here a little short-sighted. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;In their proposal entitled &#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools,&#039; the authors propose to use understandings from cognitive science, neuroscience, and human-computer interaction to develop better tools for examining data. In particular, they will develop software that will visualize neural connections in the brain. At the same time they will actively measure the client and use these data to predict what the client will want to see next. They present a compelling case that we need better visualization tools for understanding the brain.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #3&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The dominant approach to developing interactive systems is for the developers to interact with the envisioned users to gather the general requirements for the application and to construct software based on the developer&#039;s intuitions as to how the users will actually interact with the software. Depending on the sophistication of the organization, cycles of usability testing and re-design are used to refine the interface; alternatively, they may simply release the software to the users and wait for the complaints or lack of sales. &lt;br /&gt;
:&#039;&#039;An alternative to this expensive process is to construct explicit models of the perceptual and decision-making processes of the users and then use these models to inform the design process. Work on cognitive models such as GOMS and ACT began about three decades ago and has progressed slowly but steadily throughout that period, and there have been a number of small demonstrations that such models can, in fact, eliminate most or all of the iteration previously required. &lt;br /&gt;
:&#039;&#039;The authors of this proposal are planning to build cognitive models of brain scientists&#039; perception and reasoning in performing their research. They then intend to use these models to develop new and improved interaction and visualization techniques for tracing neural pathways, with the expectation that use of the cognitive models will reduce the trial and error required to produce effective tools. Additionally, the cognitive models may even result in the invention of new visualizations through a more systematic exploration of the design space. &lt;br /&gt;
:&#039;&#039;Although the focus of the recent work in cognitive models has been to develop engineering models which are capable of being used outside of the research setting, use of such models in the design of interactive systems has been slow to catch on. If nothing else, construction of the models requires a large amount of intellectual labor and, to date, impressive examples of the use of these models to justify that labor investment have been rare. This work has the potential for providing such a critical example and could be the impetus to finally move cognitive models into widespread use. &lt;br /&gt;
:&#039;&#039;Additionally, work in as complex an area as brain science will ensure that the cognitive modeling tools can handle nearly any application. &lt;br /&gt;
:&#039;&#039;Finally, if the models do result in improved tools, the research may result in new findings in the brain science field. &lt;br /&gt;
:&#039;&#039;The primary risk in this proposal is that the cognitive models which can be created are too weak to support the design process. There is no guarantee that the computer scientists and psychologists doing this research will be able to understand and model the cognitive processes of a brain scientist. &lt;br /&gt;
:&#039;&#039;&#039;In section d, we have included our experience of developing visualization tools for brain study, and in section f (timeline), brain scientists will be involved in each phase in model evaluation, testing, and feedback. ([[User:Chen Xu|Chen Xu]])&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;Value-added of funding the activity as an Expedition &lt;br /&gt;
:&#039;&#039;This work requires substantial commitment on the part of the computer scientists and psychologists to learn the brain science domain and on the part of the brain scientists for their interaction with the cognitive scientists. Such a commitment is unlikely to be obtained with smaller, more fragmented funding. Industry and venture capital are unlikely to fund this kind of research. &lt;br /&gt;
:&#039;&#039;The main knowledge transfer methods will be the mentoring of graduate students and the addition of formal courses intended to teach about interdisciplinary collaboration. In addition, the software they develop will be made available for distribution. Except through the rather limited vehicle of scholarly publication, it is not clear how the cognitive models themselves are to be made available. The authors may want to consider using their own visualization capabilities to explain the models. &lt;br /&gt;
:&#039;&#039;Leadership and Collaboration Plan &lt;br /&gt;
:&#039;&#039;Only two institutions are involved in this work and the senior researchers are all located at one of the two institutions. This should minimize coordination problems. &lt;br /&gt;
:&#039;&#039;Funding for the primary brain scientist in this research is at the 50% level and his supervisor has a nominal level of funding. This is a cause for concern, given the level of commitment required to support what is, essentially, someone else&#039;s area of research. A higher level of funding would be desirable, even if a substantial amount of the funded time is spent on pure brain science research. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal includes training a number of graduate and postdoctoral students; in fact, most of the funding requested is for student support. &lt;br /&gt;
:&#039;&#039;As mentioned earlier, to the extent this work results in wider acceptance and usage of cognitive models, particularly in the development of scientific software, it will accelerate the construction of interactive systems which can be used efficiently. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This work should lead to significantly wider use of cognitive modeling in interactive systems design as well as provide researchers in brain science with superior tools.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #4&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Fair&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Summary &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal seeks to develop new visualization techniques that will assist brain scientists with the interpretation of high-dimensional data. For this purpose, the PI will incorporate design principles and knowledge from cognitive science, neuroscience, and human-computer interaction. The visualization system will also capture data from scientists as they use the tool and compare it with computational models from cognition, perception, and art. The tool will also be able to predict user performance and user state over time. The tool will be released through an open-source license and will be incorporated into two courses. The team has worked together for a number of years. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Criterion 1. What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Strengths &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The kind of tool envisioned in this proposal would be invaluable, not only in neuroinformatics but also in other disciplines that deal with high-dimensional, multi-scale data, from social networks to geospatial information. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Weaknesses &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Despite its laudable objective, this work is not ready for further scrutiny as a full proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;First, the proposal fails to articulate a clear research plan and a clear set of outcomes. As an example, the proposal mentions a number of cognitive concepts that will be incorporated in the interface, such as causal reasoning and dual systems theory. How are these principles going to be used to design a better visualization, and how are they going to be tested? I very much like the idea of using cognitive principles, but would have expected to see some indication of how they would be put into practice. The proposal is even less clear when it comes to goal maintenance: &amp;quot;We will use these principles to determine which tasks to make easily accessible to users and which to put in the background.&amp;quot; This is a general problem for interface design, not a solution to the problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;Reviewer 4 indicated that the proposal was not explicit about how cognitive principles would be implemented in the project. In section ___ on page __, we have elucidated how the principles listed will concretely lead to better, more optimized visualizations, as well as the studies we plan to perform to show that the visualizations are indeed efficient. ([[User:Jenna Zeigen|Jenna Zeigen]])&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;A more ambitious undertaking is that of predicting user performance. This aspect of the work is motivated by previous studies showing that student performance on algebra problems can be predicted based on eye movements. This is a very interesting result, but it is not clear how it would apply to an entirely different domain that requires a different and more taxing set of cognitive skills. This is a wild extrapolation. The proposal does not describe how user behavior will be measured (other than through eye trackers) or even how performance is going to be measured. Algebra problems are generally closed-ended, with a well-defined solution, whereas exploratory data analysis of neuroinformatics data is an open-ended problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&#039;In section f, Timeline, phase/year 3, we have described how user performance will be measured and the cognitive models evaluated. We will evaluate the models&#039; predictions against actual user data; in addition to eye tracking, we will adopt computer-interaction logging, video logging, and skin-conductance response, and the results will help refine the models. ([[User:Chen Xu|Chen Xu]])&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;Another concern is methodological. Say that the proposal had articulated a clear plan and a reasonable set of deliverables for a new generation of visualization interfaces. Wouldn&#039;t it be better to test this interface on some benchmark problems, and see how it facilitates performance relative to a standard interface? These benchmark problems would be representative of the cognitive skills that the interface attempts to capture, and would also vary in their degree of complexity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;To what extent does the proposed activity suggest and explore creative, original, or potentially transformative concepts? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal is very ambitious in its overall objectives. A visualization tool having the characteristics suggested in the proposal would be invaluable to brain science as well as to other scientific disciplines dealing with high-dimensional complex data, such as genomics/proteomics, geospatial analysis, network analysis, etc. However, the proposal fails to turn a high-level concept into a realizable implementation. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Is the work of sufficient import, scale, and/or complexity to justify this type of investment? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The brain is one of the scientific frontiers for the 21st century. The proposal has the complexity and scale worthy of this type of investment, but the proposal fails to deliver a realistic plan (if any plan at all) or even specifications. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Will the work contribute to realization of the EIC program goals and is it likely to demonstrate completion of these goals? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Understanding the brain is one of our greatest scientific challenges. Unfortunately, without a clear research plan it is difficult to assess the likelihood that the proposal will be able to demonstrate completion of its overall goals. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Value of the experimental systems or shared experimental facilities proposed &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The investigators will utilize some shared facilities in their research and will share software and data that they produce to allow further research by others. The proposed software testbed will be used across the collaborators to test models of cognition and perception in the context of HCI. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Leadership and Collaboration Plan &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The investigators have worked together for a number of years, have taught classes together, and their students have attended classes from each other. No leadership or collaboration plan is discussed beyond this. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Criterion 2. What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Strengths &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed work would provide research opportunities for faculty, postdocs and students working in the project. A tool with the capabilities described in the proposal would mostly benefit the brain science research community. The tool will be used in two computer science courses at Brown. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Weaknesses &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Societal benefits of this tool would derive from its scientific merit to the extent that it would help understand the brain. Given the characteristics of this project, I wonder if other NSF funding opportunities would be more suitable, such as the FODAVA program or the interdisciplinary program in neuroscience at CISE. I also wonder whether this work should be funded instead by NIH (NIBIB, NIMH). The budget contains a request for $3,000 to cover costs of animal (mouse) care; why is this needed given that the proposal is for software development? &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;While the scientific merit of this project is most directly applicable to understanding the brain, it should be noted that the goal to “convincingly demonstrate that the employed techniques facilitate better analysis” (described in section c.1) is unique to this proposal and would have impacts reaching into any other discipline in which vast amounts of data need to be interpreted. --Michael&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a very ambitious proposal that seeks to develop a new generation of visualization tools for the analysis of neuroinformatics data. The tools would allow brain scientists to explore high-dimensional data, and the tool would also predict user performance and state. The proposal is inspired by principles from cognitive science, neuroscience and HCI. While this is the kind of big-picture, high-risk project that the Expeditions in Computing program is designed to support, the proposal itself fails to provide a plan for achieving its high-level, abstract objectives. The proposal does not provide a leadership or collaboration plan.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #5&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Fair&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PROPOSAL OBJECTIVES AND APPROACH &lt;br /&gt;
:&#039;&#039;The proposal develops a variety of tools for interactive analysis and reasoning for brain scientists. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;INTELLECTUAL MERIT &lt;br /&gt;
:&#039;&#039;The project addresses research in three areas: human-computer interaction, cognitive modeling, and connectivity in the brain. It lists 11 items that are to be developed. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal is vague and quite repetitive. It is not well written. It is not clear what research experiments will be performed. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The team is fine. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;There is a great need for the tools by the computational neuroscience and cognitive science community. This project will develop some of these tools. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The details of the educational plan are not given. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal can be strengthened by focusing and making the challenges and ideas more clear.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #6&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Poor&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity of developing and improving neuroscience visual analysis tools is very important. Currently access to the genre of systems described in this proposal, especially in areas with complex sub-structure such as neuroscience, is lacking and the proposed activity could have a profound effect on the state-of-the-art in the field. The possible interplay between the user-interface experts and biomedical informatics developers is a possible strength. The available team and resources are very strong and capable with an excellent track record in the field. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;However, the proposed activities within the proposal are completely underspecified. Given the scale of the project and the challenging nature of the problems to be faced, the level of technical detail and planning presented in the proposal is insufficient and unconvincing. A great many claims are made in the proposal with no clear measurable end-points to determine how success within the project could be evaluated. Specifically, the authors claim to employ developments in concepts of cognition and perception to assist scientists&#039; reasoning. What aspects of reasoning? For which tasks? Within which discipline? If this is only related to tractography, what sort of scientific hypotheses do the applicants expect to address? Are they relating these representations of neural connectivity to studies in animals? Are they relating these analyses to other modalities of imaging data? How do they intend to reason over the complex semantics of these other experimental types? These are all glaring omissions from the proposal. The description of the cognitive science aspects of the project was marginally better specified, but I still found the details lacking. For example, the claim was made that &#039;our system will tune itself to individual work styles.&#039; How? What technical elements will the system exploit to accomplish this? In particular, the applicants must spend more time specifying the precise tasks that the system is designed to tackle before it is possible to improve or optimize performance at that task. &lt;br /&gt;
&lt;br /&gt;
::Response: We have added a section called &amp;quot;Task Analysis,&amp;quot; in which we outline a plan and preliminary results for specifying visual analysis tasks through user observation. ([[User:Caroline Ziemkiewicz|Caroline Ziemkiewicz]] 10:48, 22 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;In particular, the statement &amp;quot;We expect the core element to evolve significantly through the five years of the project. It cannot be meaningfully defined without the data we will acquire from users, so details beyond this overview are not possible yet&amp;quot; is incredibly revealing and suggests that the applicants themselves do not have a clear idea of how they intend to solve these problems. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The Figures presented in the proposal looked confusing and uninformative, adding nothing to the argument that these systems would actually help a scientist understand the underlying data. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;I very much liked the proposed idea of using the system directly in courses taught at Brown and other collaborating institutions. I think that this is a sizable innovation that would be very welcome in the field and might even form the basis of evaluation metrics for the success of the system (which could address one of my previous criticisms of the project). &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The activity is well integrated with training and learning. There are a number of students in the group, all of whom would have the opportunity to work with professionals from a very different focus. The interdisciplinary nature of the work coupled with the need for analysis tools in biology would be an excellent synergy to cultivate. The presence of high-end graphics equipment (such as a virtual-reality cave and haptic displays, etc.) is a plus for the project but is also a hindrance to the developers&#039; ability to release their work to a broader audience. If the system is only available to the small number of people who have access to such facilities, then the impact of the work would be lessened. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity makes no specific claims to target or support underrepresented groups explicitly. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The underspecification of the technical aspects of the project undermines the open-source distribution of the code. It is technically demanding to generate usable open-source products for other people to use. Notably, in browsing the co-PIs&#039; webpages, I found no easily accessible open-source software products. &lt;br /&gt;
&lt;br /&gt;
:#&#039;&#039;&#039; Since the review, the PI&#039;s lab has published a large selection of source code from previous projects, available from its website (vis.cs.brown.edu), with provisions for multiple platforms.&lt;br /&gt;
:#&#039;&#039;&#039; We are undertaking a project to make anonymized images of healthy and abnormal brains (and related data), from our projects and those of a number of collaborators worldwide, freely available to the scientific community.&lt;br /&gt;
:#&#039;&#039;&#039; Development will be done using a publicly visible source code repository (e.g., Github), so that other scientists and the public may be able to track progress, comment on changes, provide feedback, and even contribute pieces of code to the software.&lt;br /&gt;
::&#039;&#039;&#039; ([[User:Nathan Malkin|Nathan]])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Although the high-level conceptualization of this project is exciting, the way that the project was described in the proposal was massively underspecified. Technical details were lacking and some fundamental aspects of the project&#039;s conceptualization in terms of the scientific domain under study were missing. There was no timetable, and no evaluation proposed to see how progress would be measured. The authors should be careful about making high-level claims concerning the possible impact of the proposed work without a more carefully constructed argument to back up the claims.&lt;br /&gt;
:&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_to_read_for_week_4.11&amp;diff=5348</id>
		<title>CS295J/Literature to read for week 4.11</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_to_read_for_week_4.11&amp;diff=5348"/>
		<updated>2011-09-27T16:42:17Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;*[http://dl.acm.org/citation.cfm?id=1324946 Automatic cognitive load detection from speech features] Yin-2007-ACL -- [[User:Jenna Zeigen|Jenna Zeigen]]&lt;br /&gt;
: Again, discusses a way to evaluate a user&#039;s cognitive load. This is relevant to the reviewers&#039; comment that the proposal lacked a plan for evaluating how well the visualizations align with cognitive principles. If we are applying those principles properly, then cognitive load levels would be low. (Owner: Jenna, Discussant: [[User:Clara Kliman-Silver|Clara Kliman-Silver]], Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org.revproxy.brown.edu/citation.cfm?id=1357101&amp;amp;CFID=49369132&amp;amp;CFTOKEN=47488919 Integrating statistics and visualization: case studies of gaining clarity during exploratory data analysis] Perer-2008-ISV (Owner: Diem Tran, Discussant: ?, Discussant: ?)&lt;br /&gt;
:This paper attempts to integrate statistical methodology and visualization in interactive exploratory tools. The authors conduct long-term case studies to evaluate this approach, which yield positive results about its effectiveness. &lt;br /&gt;
:The paper is an example of how an exploratory tool can be evaluated over the long term.&lt;br /&gt;
&lt;br /&gt;
* [http://csjarchive.cogsci.rpi.edu/proceedings/2003/pdfs/115.pdf Cognitive Design Principles for Visualizations] Heiser-2003-CDP&lt;br /&gt;
: Claims that the design of visualizations can be informed by cognitive principles and, based on those principles, be automated. Their research has implications in any domain in which visualization is useful, including our proposed project. This is relevant to Jenna&#039;s response to reviewer #4, by showing that cognitive principles can and have been implemented in the design of visualizations. (Owner: Michael, Discussant: ?, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1335677&amp;amp;tag=1 Visual variability analysis for goal models] Gonzales-2004-VVA&lt;br /&gt;
: This work proposes a visual variability analysis technique: it analyzes the correlation between functional goal models and softgoals (requirements variants) and tries to find the most satisfactory goal models and requirements. This can be used to illustrate how cognitive models like goal maintenance and multitasking can be designed for and achieved in our project. The authors also implemented a tool to support the technique, which could give us some insight into how to put a cognitive model into practice. (Owner: Chen, Discussant: ?, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* Data Visualization Optimization via Computational Modeling of Perception Pineo-2012-DVO&lt;br /&gt;
: This article, to be published next year, develops paradigms for optimizing data visualization in neural-perceptual models. The algorithms aid in visualizations of different areas of the brain and attempt to offer the best projections of neural systems and neural activity. Given that these models are new and highly advanced, they reinforce and extend much of the work in the proposal (Jenna&#039;s comment). (Owner: [[User: Clara Kliman-Silver|Clara Kliman-Silver]], Discussant: ?, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science/article/pii/S1071581901904719 Embodied models as simulated users: introduction to this special issue on using cognitive models to improve interface design] Ritter-2002-EMS&lt;br /&gt;
: A reviewer worries, &amp;quot;The primary risk in this proposal is that the cognitive models which can be created are too weak to support the design process.&amp;quot; We can provide evidence that this risky part is feasible by pointing to the findings cited in this article (and others from the issue that the title refers to) of successful uses of cognitive models for developing and designing interfaces.&lt;br /&gt;
: (Owner: [[User:Nathan Malkin|Nathan]], Discussant: ?, Discussant: ?)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5347</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5347"/>
		<updated>2011-09-27T16:41:13Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, from the 1940s, inform visual design and may provide an analogy for the integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  The first and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided Search 2.0: A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, the process of design)? They find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot; but it &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753357 Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visual Design], Heer and Bostock, CHI &#039;10 &lt;br /&gt;
: This paper explores crowdsourcing as a viable method for conducting visualization perception evaluations. They replicate some results of Cleveland and McGill&#039;s 1984 graphical perception paper, and do some analysis on cost and performance of using MTurk for these studies on static, chart-type visualizations. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query from assumptions from eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/PSYCO354/pdfstuff/Readings/Evans2.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green; font-weight: bold&amp;quot;&amp;gt;Owner: [[User:Nathan Malkin|Nathan Malkin]]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1518701.1518717&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Getting inspired!: understanding how and why examples are used in creative design practice] Herring-2009-GIU&lt;br /&gt;
: A user study on the use of examples to improve creativity. Results show that examples are very useful for inspiring designers with new ideas. Surprisingly, inspiring examples are not limited to the design domain but extend to other areas as well.&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210002852&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=31-Jan-2011&amp;amp;view=c&amp;amp;wchp=dGLbVBA-zSkWA&amp;amp;md5=b7ee648939c3d41e25e3317f8be617dc/1-s2.0-S0747563210002852-main.pdf Contemporary cognitive load theory research: The good, the bad and the ugly] Kirschner-2010-CCL&lt;br /&gt;
: This review summarizes and critiques 16 papers on cognitive load theory (CLT) and its impact on learning and the ability to navigate different environments. It also discusses the difficulties inherent in the study of cognitive load and the moves made to address them. While the paper does not contribute directly to our research, it does provide background on some of the issues of usability and &amp;quot;tolerance&amp;quot; of HCI systems that we discussed last week. [[User:Clara Kliman-Silver|Clara Kliman-Silver]] 13:54, 18 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://cacm.acm.org/magazines/2003/3/6879-models-of-attention-in-computing-and-communication/fulltext Models of attention in computing and communication: from principles to applications] Horvitz-2003-MAC&lt;br /&gt;
: Talks about efforts to make UIs &amp;quot;aware&amp;quot; of their user&#039;s ability to attend and comprehend. [[User:Jenna Zeigen|Jenna Zeigen]] 10:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://rl3tp7zf5x.scholar.serialssolutions.com/?sid=google&amp;amp;auinit=P&amp;amp;aulast=Slovic&amp;amp;atitle=The+construction+of+preference.&amp;amp;id=doi:10.1037/0003-066X.50.5.364&amp;amp;title=American+psychologist&amp;amp;volume=50&amp;amp;issue=5&amp;amp;date=1995&amp;amp;spage=364&amp;amp;issn=0003-066X The construction of preference ]&lt;br /&gt;
&lt;br /&gt;
* [http://search.bwh.harvard.edu/new/pubs/Kunar%20et%20al.%202008%20P%26P.pdf The role of memory and restricted context in repeated visual search] Kunar-2008-RMR&lt;br /&gt;
: Why don&#039;t people who have to perform repeated visual search (searching through an unchanging display for hundreds of trials) use their memory to speed up their tasks? Several experiments reported in this paper &amp;quot;show that participants choose *not* to use a memory strategy because, under these conditions, repeated memory search is actually less efficient than repeated visual search, even though the latter task is in itself relatively inefficient.&amp;quot; However, if you restrict where in the image the target stimuli may appear, using memory becomes more efficient.&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pinker.wjh.harvard.edu/articles/papers/Pinker%20A%20Theory%20of%20Graph%20Comprehension.pdf A Theory of Graph Comprehension] Steven Pinker&lt;br /&gt;
: [[User:Caroline Ziemkiewicz|Caroline Ziemkiewicz]] 17:52, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://adrenaline.ucsd.edu/kirsh/articles/cogscijournal/DistinguishingEpi_prag.pdf On Distinguishing Epistemic from Pragmatic Actions] Kirsh and Maglio&lt;br /&gt;
: [[User:Caroline Ziemkiewicz|Caroline Ziemkiewicz]] 17:52, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=1241057 Galvanic skin response (GSR) as an index of cognitive load] Shi-2007-GSR -- [[User:Jenna Zeigen|Jenna Zeigen]]&lt;br /&gt;
: Discusses how GSR can be used to evaluate users&#039; stress and arousal levels while using different forms of an interface. This is relevant to the reviewers&#039; comment that the proposal lacked a plan for evaluating how well the visualizations align with cognitive principles. If we are applying those principles properly, then stress and cognitive load levels would be low.&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=1324946 Automatic cognitive load detection from speech features] Yin-2007-ACL -- [[User:Jenna Zeigen|Jenna Zeigen]]&lt;br /&gt;
: Again, discusses a way to evaluate a user&#039;s cognitive load. This is relevant to the reviewers&#039; comment that the proposal lacked a plan for evaluating how well the visualizations align with cognitive principles. If we are applying those principles properly, then cognitive load levels would be low.&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=HAqg6KfU1ekC&amp;amp;oi=fnd&amp;amp;pg=PP1&amp;amp;dq=cognitive+task&amp;amp;ots=KA73NQp8RN&amp;amp;sig=iQ1DJJNM7Kvpf4BDlRVrFFwarGU#v=onepage&amp;amp;q&amp;amp;f=false Cognitive Task Analysis] Schraagen et al. &lt;br /&gt;
: &amp;quot;Cognitive task analysis is a broad area consisting of tools and techniques for describing the knowledge and strategies required for task performance. The topics to be covered by this work include: general approaches to cognitive task analysis, system design, instruction, and cognitive task analysis for teams.&amp;quot; I think this book may be relevant in that some of the cognitive task analysis methods in the book may serve as a reference when designing cognitive performance evaluation benchmark tests for the proposed brain visualization system, but I am not too sure. [[User:Hua Guo|Hua]]&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper which demonstrated that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
&lt;br /&gt;
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance that used curvature to predict the speed of a pen as it moved across a surface, to help disambiguate target selection intent. &lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual entries.  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then measure these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Method) aspect handles the parallel nature of several human components of HCI and seems to model the low-level tasks from this study very accurately.  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
: Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential on menu-technique research over the years. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
: This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
: Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
: Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
: Any visualization which is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context to design the visualization application appropriately to fit the needs of real world users.  This paper studies in detail the daily workflow tasks and patterns of work of analysts, managers and software developers in a medium-sized software company.  This paper provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality, switch tasks on average every two to three minutes, and instead, work on larger thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, it is strongly indicated by this paper that many information workers are in a constant state of rapid fire multi-tasking.  This suggests that for a visualization to be relevant to any of these information workers, it would need to fit into, and support, this workflow.  This is just a first step towards understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  Advantages of automated evaluation are purported to be linked to efficiency, such as comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks down a taxonomy with individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g. Smith and Mosier guidelines, Motif style guidelines, etc).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop discussion from CHI 2007 discussing the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Discusses things like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidtt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failure, and then discusses misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class, has many good lists of guidelines. Especially Ch.2 pp 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods, a fair introduction to their industrial applications and certain costs and benefits, as well as suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG), 5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper to discuss how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous paper to analytics tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (art, science, and technology) and questions and quantifies the utility of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039;s law to trajectory-based tasks.&lt;br /&gt;
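: As a side note (not from the annotation above; the formula below is the standard statement of the steering law from Accot and Zhai&#039;s paper), the steering law generalizes Fitts&#039;s law from acquiring a single target to traversing a path constrained by a tunnel of varying width W(s):&lt;br /&gt;

```latex
% Steering law (Accot & Zhai, CHI '97): movement time T grows with the
% integral of inverse tunnel width along the path C
T = a + b \int_{C} \frac{ds}{W(s)}

% For a straight tunnel of constant width W and length A, this reduces to
T = a + b \,\frac{A}{W}
```

: Here a and b are empirically fitted constants, just as in Fitts&#039;s law.&lt;br /&gt;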
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several InfoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr.Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods. A cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Operators, Methods, and Selection rules) based interaction techniques.  GOMS is a widely used UI analysis paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.pdf The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A paper in form of prelude to the above, gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interactive visualization interface for 3D selection of neural pathways in human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and intuitively. (--- [[User:Chen Xu|Chen Xu]] 15:04, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye tracking methodologies continue to grow and improve. (Clara, 11 September 2011--OWNER; Chen -- Discussant)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
: Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.comp.leeds.ac.uk/umuas/reading-group/kaptelinin-ch5.pdf Activity Theory: Implications for Human-Computer Interaction] Kaptelinin--1996-ATI&lt;br /&gt;
: This article discusses activity theory, an alternative to present theories surrounding HCI. In particular, it examines the principal differences between activity theory and cognitive theory, applies it to HCI, and suggests implications for the field. While not directly relevant to the proposal, it offers an alternate framework for some of the issues that we discuss. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pages.cpsc.ucalgary.ca/~saul/wiki/uploads/HCIPapers/landauer-letsgetreal.pdf Let&#039;s Get Real: a Position Paper on the Role of Cognitive Psychology in the Design of Humanely Usable and Useful Systems] Landauer-1991-LGR&lt;br /&gt;
: Perhaps less useful, only because it&#039;s 20 years old, but an interesting read nonetheless: this paper questions the &amp;quot;modern&amp;quot; relevance of cognitive psychology to human-computer interaction design. The primary issue, it argues, is that human-computer systems are entirely unpredictable, and thus some of the modern understanding of cognition (and, indeed, HCI theory) simply cannot apply given the erratic behavior of computer systems. Instead, he addresses some of the more &amp;quot;useful models,&amp;quot; including Fitts&#039;s law and theories of visual perception, to define a new space for emerging research in HCI. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1978969&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Mid-air pan-and-zoom on wall-sized displays] Nancel-2011-MPZ&lt;br /&gt;
: The paper describes approaches to perform pan and zoom tasks in mid-air: bimanual &amp;amp; unimanual, linear &amp;amp; circular gestures. &lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979430&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 LiquidText: a flexible, multitouch environment to support active reading] Tashman-2011-LER&lt;br /&gt;
: A technique utilizing multitouch to improve reading efficiency.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979392&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Rethinking &#039;multi-user&#039;: an in-the-wild study of how groups approach a walk-up-and-use tabletop interface] Marshall-2011-RMU&lt;br /&gt;
: An ethnographic study that explores how groups of users approach tabletop interfaces in real environments. Some results contradict existing findings.&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU&lt;br /&gt;
: Discusses the merits of HCI (here called muCI, for muscle-computer interaction) by detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://courses.ischool.utexas.edu/rbias/2009/Spring/INF385P/files/annurev.psych.54.101601.pdf HUMAN-COMPUTER INTERACTION: Psychological Aspects of the Human Use of Computing] Olson-2003-HCI&lt;br /&gt;
: Overview of issues in psychology in HCI ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210001718&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=30-Nov-2010&amp;amp;view=c&amp;amp;wchp=dGLzVlt-zSkzS&amp;amp;md5=06d9eeba2447db1d43abcf99d1d7e995/1-s2.0-S0747563210001718-main.pdf Integrating cognitive load theory and concepts of human–computer interaction] Hollender-2010-ICL&lt;br /&gt;
: This paper compares existing models of cognitive load theory as it applies to HCI, reviews present literature, and discusses current problems and potential advances. Relevant to our work but ventures into theory that is heavier than necessary given our purposes. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 15:58, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=4154947 Gesture Recognition : A Survey] IEEE&lt;br /&gt;
: This article provides a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. ([[User:Wenjun Wang|Wenjun Wang]] , 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science/article/pii/S1071581903001125 A person–artefact–task (PAT) model of flow antecedents in computer-mediated environments] Finneran-2003-PAT&lt;br /&gt;
: Re-evaluates flow theory within the HCI framework and proposes a model that fits best in the field [[User:Jenna Zeigen|Jenna Zeigen]] 11:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1980000/1978995/p363-pan.pdf?ip=138.16.109.17&amp;amp;CFID=42891662&amp;amp;CFTOKEN=62664825&amp;amp;__acm__=1316465415_bf314f267e966fb3000292152378c16a Now Where Was I? Psychologically-Triggered Bookmarking], Pan et al., CHI &#039;11&lt;br /&gt;
: Presents an interaction paradigm for implicitly bookmarking application progress/media during user interruptions (e.g., phone ringing), using galvanic skin response (GSR) to automatically identify the orienting response (OR) to interruptions.  The authors evaluate how well GSR works for identifying user ORs, and describe a few experiments using an audiobook listener application that creates bookmarks (stored and represented in a GUI) automatically when GSR peaks in response to controlled stimuli.  In our project, we&#039;re looking at predicting user states (or effect on task performance), so this is an example of that kind of predictive affect-tracking that has been engineered into a usability feature.  ([[User:Steven Gomez|Steven Gomez]] 17:12, 19 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/people/benko/publications/2009/Ripples%20UIST09.pdf Ripples: Utilizing Per-Contact Visualizations to Improve User Interaction with Touch Displays], Wigdor-2009-PCV&lt;br /&gt;
: Demonstrates a system for visual feedback on touch screen interfaces, designed to reduce the ambiguity that can arise in the absence of feedback. The system changes the feedback based on the task at hand and claims to provide information in an intuitive way as to how the touch screen is registering the inputs. [http://research.microsoft.com/en-us/um/people/benko/publications/2009/Ripples_UIST.wmv Video demonstration] [[User:Michael Spector|Michael Spector]] 17:30, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=1979013 Synchronous interaction among hundreds: an evaluation of a conference in an avatar-based virtual environment] CHI-2011&lt;br /&gt;
:This paper presents the first in-depth evaluation of a large multi-format virtual conference. The conference took place in an avatar-based 3D virtual world with spatialized audio, and had keynote, poster and social sessions. ([[User: Wenjun Wang|Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the promises] Jacob-2003-ETH&lt;br /&gt;
: This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent feud between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  There is little reason given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticisms which follow descriptions, in which theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
: High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:Attempts to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen—the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R]&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar]&lt;br /&gt;
:Soar is another cognitive architecture developed at CMU, now maintained at UMich. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.4719&amp;amp;rep=rep1&amp;amp;type=pdf Brain-Computer Interfaces and Human-Computer Interaction]&lt;br /&gt;
: (Not sure what heading this ought to go under!) Brain-Computer Interfaces (BCI) offer a neurological corollary to human-computer interfaces: users use their thoughts to signal to machines, instead of relying on physical movements. Thus, the areas activated are purely cognitive, not motor. The article provides an overview of the differences between HCI and BCI, the implications thereof, and the directions that their interaction may take. Relevant to some of the issues and concepts raised in the proposal, in addition to being a rather interesting idea! (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=101 Spanning Seven Orders of Magnitude: A Challenge for Cognitive Modeling], John Anderson, 2002&lt;br /&gt;
: The paper argues that high-level human behavior can be understood by analyzing the chain of fast, low level activity (from 10ms up) in the perceptual/cognitive bands that compose larger behaviors. It gives an intro to ACT-R and variants and some compelling examples for cognitive modeling and eye-tracking. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://search.ebscohost.com.revproxy.brown.edu/login.aspx?direct=true&amp;amp;db=psyh&amp;amp;AN=2003-08881-009&amp;amp;site=ehost-live Cyberpsychology: A Human-Interaction Perspective Based on Cognitive Modeling] Emond-2003-CHI&lt;br /&gt;
:This paper argues for the applicability of cognitive modeling to cyberpsychology, the study of the impact of computer and Internet interaction on humans. ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1980000/1978559/p62-modha.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315949230_6e20807eca999db6a25f54bcbf51ae6a Cognitive Computing], Modha-2011-CC&lt;br /&gt;
: Aims to unite neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain’s core algorithms.  (--- [[User:Chen Xu|Chen Xu]] 17:25, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: (This item can also go into the Evaluation category) This paper discusses cognitive load theory and its application in measuring the effectiveness of graph visualization. A model of user task performance, mental effort, and cognitive load is proposed, and experiments have been conducted to refine the model. This seems to be an attempt along the lines of defining quality metrics for visualization through cognitive modeling, which closely relates to our proposal. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.4716&amp;amp;rep=rep1&amp;amp;type=pdf Supporting Collaboration and Distributed Cognition in Context-Aware Pervasive Computing Environments] Fischer-2004-SCD&lt;br /&gt;
: (Not exactly sure where this paper belongs.) The paper looks at several issues in HCI: 1) collaborative design (multiple users accessing, manipulating, and addressing information, in what they call &amp;quot;large computational spaces&amp;quot;), 2) mobile technology (mobile phones, wireless, etc.), and 3) smart objects (seems to be largely mobile phones and similar devices). This paper is dated and parts of it are very interesting but equally irrelevant. Nonetheless, it asks some important questions about how to deliver information to the user, how to manage search techniques and memory systems (particularly with searching) in HCI, and how to access information, all of which are crucial to &amp;quot;modern&amp;quot; HCI research. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 17:15, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.180 Attention, Habituation and Conditioning: Toward a Computational Model] Balkenius-2000-AHC&lt;br /&gt;
** &amp;quot;The central claim of this article is that attention can be controlled in the same way as actions using similar learning mechanisms and by related areas of the brain.&amp;quot; &lt;br /&gt;
** &amp;quot;A computational model of attention is presented that uses habituation as well as classical and instrumental conditioning to explain a number of attentional processes.&amp;quot;&lt;br /&gt;
** &amp;quot;Computer simulations are presented that illustrates the operation of the model.&amp;quot;&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.computer.org/portal/web/csdl/doi/10.1109/IV.2003.1218045 CAEVA: Cognitive Architecture to Evaluate Visualization Applications] Juarez-Espinosa, 2003&lt;br /&gt;
: (May also be categorized in Evaluation) This paper, according to the author, is the first to use cognitive modeling to evaluate computer applications. It presents a rather high-level picture of a proposed cognitive architecture intended for evaluating visualization applications. The architecture has two main components: an interoperability model and a cognitive model. The cognitive model simulates a human interacting with the visualization system; it has two components, one for domain-dependent knowledge and one for domain-independent knowledge. The knowledge is further divided into two categories: declarative knowledge and procedural knowledge. The interoperability model describes how the cognitive model communicates with the visualization application.&lt;br /&gt;
: The paper does not provide a detailed implementation of, or experiments with, the proposed cognitive architecture. It appears the architecture was not fully implemented and functioning at the time of publication.&lt;br /&gt;
: ([[User:Hua Guo|Hua]], Sep. 19, 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ofai.at/research/agents/conf/at2ai6/papers/Muller.pdf Implementing a Cognitive Model in Soar and ACT-R: A Comparison] Muller&lt;br /&gt;
: This paper presents an implementation of a cognitive model in the cognitive architecture Soar. It also presents a comparison of this implementation with a previous implementation in ACT-R. The cognitive task is not quite relevant to our research, but it may help to get an idea of what it is like to implement a cognitive task in ACT-R/Soar. ([[User:Hua Guo|Hua]], Sep. 19, 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.psy.utexas.edu/homepage/group/langloislab/PDFs/Bronstad.p.2008.pdf Computational Model of Facial Attractiveness Judgements] Bronstad-2008-CMF&lt;br /&gt;
: Discusses computational models of facial attractiveness judgements: one model uses partial least squares on facial images and attractiveness ratings, while the second uses manually derived measures of facial features as input. Results are discussed. The paper also concludes that averageness and sexual dimorphism are important for facial attractiveness judgements. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://papers.designsociety.org/visual_reasoning_model_as_an_analysis_tool_for_different_tasks_related_to_design_abilities.paper.28859.htm Visual Reasoning Model as an Analysis Tool for Different Tasks Related to Design Abilities] Kim-2009-VRM&lt;br /&gt;
: Describes how the visual reasoning model could be used to identify characteristics of various tasks related to design ability, such as the missing view problem task, the mental synthesis task, and the conceptual design task. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://www.bus.emory.edu/prietula/prietula-steve-2.pdf Examining the feasibility of a case-based reasoning model for software effort estimation] Mukhopadhyay-1992-EFC&lt;br /&gt;
: A case-based reasoning model, called Estor, was developed based on the verbal protocols of a human expert solving a set of estimation problems. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=179844 COSIMO: A cognitive simulation model of human decision making and behavior in accident management of complex plants] Cacciabue-1992-SMC&lt;br /&gt;
: &amp;quot;A cognitive simulation model (COSIMO) which simulates the behavior of an operator controlling a complex system during the management of accidents is described.&amp;quot; (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1335677&amp;amp;tag=1 Visual variability analysis for goal models] Gonzales-2004-VVA&lt;br /&gt;
: This work proposes a visual variability analysis technique and describes an implemented tool that supports it. We can investigate how goal maintenance can be put into practice. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science/article/pii/S0028393299001347 The cognitive and neuroanatomical correlates of multitasking] Burgess-2000-CNC&lt;br /&gt;
: As a reviewer points out, the pre-proposal section dealing with &amp;quot;goal maintenance&amp;quot; is vague and low on specifics. While this paper does not suggest a specific solution, we can use its findings (&amp;quot;that there are three primary constructs that support multitasking: retrospective memory, prospective memory, and planning&amp;quot;) to come up with specific techniques to help with individual correlates. &lt;br /&gt;
: ([[User:Nathan Malkin|Nathan]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978997 Sensing cognitive multitasking for a brain-based adaptive user interface] Solovey-2011-SCM&lt;br /&gt;
: Addressing the same reviewer comment, this paper reports the results of specific experiments in differentiating cognitive multitasking processes. We could cite this as something we would build on in our project, since &amp;quot;this prototype system [in the paper] serves as a platform to study interfaces that enable better task switching, interruption management, and multitasking&amp;quot;, and that&#039;s exactly what we&#039;re trying to build.&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan]])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science/article/pii/S1071581901904719 Embodied models as simulated users: introduction to this special issue on using cognitive models to improve interface design] Ritter-2002-EMS&lt;br /&gt;
: A reviewer worries, &amp;quot;The primary risk in this proposal is that the cognitive models which can be created are too weak to support the design process.&amp;quot; We can provide evidence that this risky part is feasible by pointing to the findings cited in this article (and others from the issue that the title refers to) of successful uses of cognitive models for developing and designing interfaces.&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan]])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:see summary for Alexander below (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices, ~250 rules are enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
: Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic that lies behind tactile user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, by perhaps forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines, some of which we may be able to build on in automating interface evaluation; will certainly apply to manual evaluations. [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: The paper presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1979100 Understanding Interaction Design Practices], Goodman et al., CHI &#039;11 (Owner: Diem Tran - Steven, if you already own this, let me know)&lt;br /&gt;
: This is a position paper describing the disconnect between HCI research and real interaction design practices.  It analyzes approaches for studying design practice (e.g., reported practice, anecdotal descriptions, first-person research), and argues a need for generative theories of design in order to address practice.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=223904.223945 Transparent layered user interfaces: an evaluation of a display design to enhance focused and divided attention] Harrison-1995-TLU&lt;br /&gt;
: Proposes a framework for classifying and evaluating user interfaces with semi-transparent windows. Comes out of research investigating graphical user interfaces from an attentional perspective. [[User:Jenna Zeigen|Jenna Zeigen]] 11:31, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [https://docs.google.com/viewer?url=http%3A%2F%2Fwww.mifav.uniroma2.it%2Fiede_mk%2Fevents%2Fidea2010%2Fdoc%2FIxDEA_6_12.pdf Design and Evaluation of a Mobile Art Guide on iPod Touch]&lt;br /&gt;
: This paper evaluates the design principles behind an iPod app with respect to minimizing cognitive load and maximizing usability. Trade-offs between HCI technology and cognitive load are discussed. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 12:29, 19 September 2011 (EDT)--OWNER for Week 3)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/ft_gateway.cfm?id=1978997&amp;amp;ftid=963755&amp;amp;dwn=1&amp;amp;CFID=48793418&amp;amp;CFTOKEN=23572138 Sensing cognitive multitasking for a brain-based adaptive user interface] Solovey-2011-SCM&lt;br /&gt;
: The paper describes interface designs that are meant to modulate task switching. Given that users are likely to multitask when using applications, it is important to have an interface that provides support for different multitasking strategies and adapts to users&#039; techniques. The researchers designed experiments to develop a better understanding of human multitasking abilities when using interface technology, and apply their conclusions to a human-robot system. Potentially, this article can lay groundwork for an improved understanding of the cognitive demands that the interface we will build places on users, and how we can address them (Hua&#039;s comment). ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 22:40, 25 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge - Improving Decisions About Wealth, Health, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using graphical interaction histories within the Tableau InfoVis application.  This is a great recent example of the &amp;quot;workflow analysis&amp;quot; that we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An Exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to find the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Same as the previous one.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=3xwfia2DpmoC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=inspiration&amp;amp;ots=WxmfUK8fgu&amp;amp;sig=RZxglxiWjKIu5MHpYRAAO6cqR_I#v=onepage&amp;amp;q&amp;amp;f=false Autonomous robots: from biological inspiration to implementation and control ]&lt;br /&gt;
: Autonomous robots are intelligent machines capable of performing tasks in the world by themselves, without explicit human control. This book examines the underlying technology, including control, architectures, learning, manipulation, grasping, navigation, and mapping. Living systems can be considered the prototypes of autonomous systems. ([[User: Wenjun Wang | Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
* [http://sfu.academia.edu/ChrisShaw/Papers/138757/BrainFrame_A_Knowledge_Visualization_System_for_the_Neurosciences Brainframe: A Knowledge Visualization System for the Neurosciences] Shaw-2009-KVS&lt;br /&gt;
: This paper begins with a brief overview of a problem plaguing the field of neuroscience today-- namely, that there is so much data available that it can&#039;t be synthesized in a useful way by researchers-- and the resulting negative effects that arise because of it. The authors propose BrainFrame, a &amp;quot;knowledge management system,&amp;quot; designed to streamline this massive amount of data in a way that is sensitive to the limitations of human cognition and perception. [[User:Michael Spector|Michael Spector]] 00:36, 20 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563208002318&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=31-Mar-2009&amp;amp;view=c&amp;amp;wchp=dGLzVlt-zSkWb&amp;amp;md5=c3aa656cdeff34d9e6e8b2cefac5332e/1-s2.0-S0747563208002318-main.pdf Uncovering cognitive processes: Different techniques that can contribute to cognitive load research and instruction] Van Gog-2009-UCP&lt;br /&gt;
: This article reviews modern eye tracking techniques as they aim to monitor different aspects of cognitive load. While the techniques employed are aimed at instruction, they are relevant to the project in that users will, presumably, have to learn to use the tools we build. It reviews different methodologies currently employed in eye tracking and cognitive load research. Cognitive load plays a significant role in any sort of data visualization, and therefore, information from this article should inform many parts of the proposal ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 23:23, 25 September 2011 (EDT)).&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.1013&amp;amp;rep=rep1&amp;amp;type=pdf Low-Level Components of Analytic Activity in Information Visualization], Amar, Eagan, and Stasko; InfoVis 2005&lt;br /&gt;
:&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1495824 Data, Information, and Knowledge in Visualization], Chen et al., CG&amp;amp;A Jan 09&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.  (As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1137505 An Approach to the Perceptual Optimization of Complex Visualizations], House, Bair, and Ware, TVCG, 2006&lt;br /&gt;
: This paper describes a humans-in-the-loop architecture for guiding layered visualizations with multiple visual parameters toward optimal tunings.  They use a genetic algorithm to iteratively produce new &amp;quot;genomes&amp;quot; of visual parameters that are evaluated by humans (and either passed along or terminated in the genetic process).  Finally they do some analysis on the surviving visualization space (though for me, this was less interesting than the generative visualization method using humans and the GA).  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11) -- &#039;&#039;&#039;Owner: Jenna Zeigen&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1180000/1179816/p168-klein.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315948835_60407af6754ff14995331d7da5b2e5f2 Brain structure visualization using spectral fiber clustering] Klein-2006-BSV&lt;br /&gt;
: Presents a novel algorithm for visualizing white matter fiber tracts in real time, with more accurate results. This visualization algorithm might be adopted in our proposal. ( --- [[User:Chen Xu|Chen Xu]] 17:21, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/570000/566646/p745-van_wijk.pdf?ip=138.16.160.6&amp;amp;CFID=45026276&amp;amp;CFTOKEN=25635165&amp;amp;__acm__=1316442526_0ba5fadd25f8f9e571d2e63ba88f17f9 Image Based Flow Visualization] van Wijk-2002-IBF&lt;br /&gt;
: Describes a two-dimensional fluid flow visualization method, called IBFV, based on the advection and decay of dye. With IBFV, a wide variety of visualization techniques can be emulated: it can visualize flow, generate arrow plots, streamlines, particles, and topological images, and handle unsteady flows. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2004.00753.x/full The State of the Art in Flow Visualization: Dense and Texture-Based Techniques] Laramee-2004-SAF&lt;br /&gt;
: Discusses dense, texture-based flow visualization techniques. (Chen)&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382909 Interactive Visualization of Small World Graphs] van Ham-2004-IVS&lt;br /&gt;
: The brain can be considered to be a small world type network. This paper &amp;quot;present[s] a method to create scalable, interactive visualizations of small world graphs, allowing the user to inspect local clusters while maintaining a global overview of the entire structure.&amp;quot; [[User:Jenna Zeigen|Jenna Zeigen]] 11:03, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=wdh2gqWfQmgC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=visualization&amp;amp;ots=olzI7xnGLy&amp;amp;sig=7F0m2_NpZcU-fr0V5CPzf99PpK4#v=onepage&amp;amp;q&amp;amp;f=false Readings in information visualization: using vision to think ] By Stuart K. Card, Jock D. Mackinlay, Ben Shneiderman.  ([[User: Wenjun Wang | Wenjun Wang]]) 11:09, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=546918 Visualization Toolkit: An Object-Oriented Approach to 3-D Graphics ]   ([[User: Wenjun Wang | Wenjun Wang]]) 11:15, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1260759&amp;amp;tag=1 Human factors in visualization research ] Tory-2004-HFV ([[User:Diem Tran|Diem Tran]] 15:15, 19 September 2011 (EDT))&lt;br /&gt;
: The paper addresses the following three aspects of human factors in research: 1) it reviews known methodology for doing human factors research, with specific emphasis on visualization; 2) it reviews current human factors research in visualization to provide a basis for future investigation; and 3) it identifies promising areas for future research.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=5728805 Data Visualization Optimization via Computational Modeling of Perception] Pineo-2012-DVO&lt;br /&gt;
: This article, to be published next year, develops paradigms for optimizing data visualization in neural-perceptual models. The algorithms aid in visualizations of different areas of the brain and attempt to offer the best projections of neural systems and neural activity. Given that these models are new and highly advanced, they reinforce and extend much of the work in the proposal (Jenna&#039;s comment). ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 23:16, 25 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?doid=1357054.1357247 Supporting the Analytical Reasoning Process in Information Visualization] Shrinivasan-2008-SAR -- ([[User:Jenna Zeigen|Jenna Zeigen]])&lt;br /&gt;
:Presents a visualization framework that incorporates and supports research on analytic reasoning. Relevant to responses questioning the applicability of cognitive principles to the project.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753326.1753358&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=49369132&amp;amp;CFTOKEN=47488919 ManyNets: an interface for multiple network analysis and visualization] Freire-2010-MNI ([[User:Diem Tran|Diem Tran]] 21:24, 26 September 2011 (EDT))&lt;br /&gt;
: The paper proposes an interface to visualize multiple networks at once. &lt;br /&gt;
: This is an example of how a vis tool is developed and evaluated: through case studies and user feedback.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753326.1753716&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=49369132&amp;amp;CFTOKEN=47488919 Useful junk?: the effects of visual embellishment on comprehension and memorability of charts] &lt;br /&gt;
: The authors explore the effectiveness of embellished graphs versus plain ones by measuring users&#039; memory. Results show that embellished graphs used in the study are more effective.&lt;br /&gt;
: This is an example of how memory is measured. This is a method to evaluate user performance for our analysis tool. [[User:Diem Tran|Diem Tran]] 21:20, 26 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www2.iwr.uni-heidelberg.de/groups/CoVis/Data/Papers/euroVis10.pdf A Salience-based Quality Metric for Visualization] Jänicke-Chen-2010&lt;br /&gt;
: This paper describes a method for defining quality metrics for visualization based on the distribution of salience over a visualization image. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/990000/989880/p109-plaisant.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932639_76770a28b192503c5f3c0ac3f997a8f1 The Challenge of Information Visualization Evaluation] Plaisant-2004&lt;br /&gt;
: This paper surveys the field of information visualization evaluation - current practices, challenges, and possible next steps. It is a relatively old article, though, so it may be superseded by a more recent survey. ([[User:Hua Guo|Hua]], Sep.12, 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1170000/1168162/a9-zuk.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932894_18cbcddc03c6ac39fc573f72790ea5d1 Heuristics for Information Visualization Evaluation] Zuk et al-2006&lt;br /&gt;
: This paper attempts to apply well-known heuristic evaluation techniques from HCI to information visualization. ([[User:Hua Guo|Hua]], Sep.12, 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://www.computer.org/portal/web/csdl/doi/10.1109/MCG.2006.70 Toward Measuring Visualization Insight]&lt;br /&gt;
: Do current approaches for evaluating visualizations provide measures of insight? This viewpoint identifies critical characteristics of insight, argues the fundamental reasons why traditional controlled experiments with benchmark tasks on visualizations do not effectively measure insight, and offers a new approach to controlled experiments that can better capture the notion of insight. ([[User: Wenjun Wang| Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=989863.989880&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=43496808&amp;amp;CFTOKEN=17350291 The Challenge of Information Visualization Evaluation] Plaisant-2004-CVE ([[User:Diem Tran|Diem Tran]] 15:15, 19 September 2011 (EDT))&lt;br /&gt;
: The paper describes current evaluation practices being used in visualization research and challenges researchers are facing. It then suggests possible steps to improve visualization evaluation.&lt;br /&gt;
&lt;br /&gt;
* [http://ccom.unh.edu/vislab/PDFs/PineoWareTAP.pdf Data Visualization Optimization via Computational Modeling of Perception] Ware-2011-TVCG &lt;br /&gt;
: This paper presents a computational model of human vision that can be used to optimize and evaluate visualization systems. I think this is a great example of the application of cognitive modeling to visualization. ([[User:Hua Guo|Hua]], Sep.19, 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/590t258p83356w78/ Learning Factors Analysis – A General Method for Cognitive Model Evaluation and Improvement] Cen-2006-LFA&lt;br /&gt;
: Proposes Learning Factors Analysis, a semi-automated method for improving cognitive models that combines a statistical model, human expertise, and a combinatorial search. The method is used to evaluate an existing cognitive model and to generate and evaluate alternative models. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1966738 Cognitive work analysis and the design of user interfaces] Westrenen-2011-CWA&lt;br /&gt;
: Outlines &amp;quot;cognitive work analysis,&amp;quot; a synthesis of two known theoretical tools: cognitive task analysis (CTA) and hierarchical task analysis (HTA). Provides a good example of cognitive metrics informing user interface design. Relevant to Hua&#039;s response in the panel summary. (Michael)&lt;br /&gt;
&lt;br /&gt;
* [http://csjarchive.cogsci.rpi.edu/proceedings/2003/pdfs/115.pdf Cognitive Design Principles for Visualizations] Heiser-2003-CDP&lt;br /&gt;
: Claims that the design of visualizations can be informed by cognitive principles and, based on those principles, be automated. Their research has implications in any domain in which visualization is useful, including our proposed project. This is relevant to Jenna&#039;s response to reviewer #4, by showing that cognitive principles can be and have been implemented in the design of visualizations. (Michael)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.128.270&amp;amp;rep=rep1&amp;amp;type=pdf Cognitive Dimensions of Notations], Green, 1989&lt;br /&gt;
: Green gives a framework and vocabulary to discuss usability dimensions for design.  Green suggests at the end that the framework can be tied to models of user activity, and that a given model might dictate system requirements in these dimensions.  This aligns with our project in the sense that we want a principled way of assessing cognitive load, both in the tool-design process and in building user-centered features.  Relevant to (mine, Hua&#039;s) comments about profiling user cognitive abilities with benchmark tasks.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.7804&amp;amp;rep=rep1&amp;amp;type=pdf A Cognitive Dimensions Questionnaire Optimised for Users], Blackwell and Green, 2000&lt;br /&gt;
: This paper discusses an interesting follow-up to Green&#039;s &amp;quot;cognitive dimensions&amp;quot; framework.  One application of cognitive dimensions is to build general questionnaires that allow users to evaluate systems and environments in these &#039;dimensions&#039;, rather than having an HCI expert guide the evaluation.  Relevant to (mine, Hua&#039;s) comments about profiling user cognitive abilities with benchmark tasks.  A questionnaire used in the development &#039;spiral&#039; of the brain application would be a principled way of evaluating &#039;cognitively optimized&#039; tools.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1357101&amp;amp;CFID=49369132&amp;amp;CFTOKEN=47488919 Integrating statistics and visualization: case studies of gaining clarity during exploratory data analysis] Perer-2008-ISV ( [[User:Diem Tran|Diem Tran]] 21:20, 26 September 2011 (EDT))&lt;br /&gt;
: This paper attempts to integrate statistical methodology and visualization in interactive exploratory tools. The authors conduct long-term case studies to evaluate this approach, which yield positive results about its effectiveness.&lt;br /&gt;
: The paper is an example of how an exploratory tool can be evaluated in a long-term manner.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=859748 Evaluating Visualizations based on the Performed Task] Juarez-2000-EVPT&lt;br /&gt;
: Another paper demonstrating evaluation methods of information visualization software. Could be useful for our evaluation purposes. Also shows applicability to other fields of research. (Michael)&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://leadserv.u-bourgogne.fr/files/publications/000599-multi-label-classification-of-music-into-emotions.pdf  MULTI-LABEL CLASSIFICATION OF MUSIC INTO EMOTIONS] ISMIR 2008&lt;br /&gt;
: Humans, by nature, are emotionally affected by music. This interdisciplinary paper presents an approach to automated emotion detection in music. Four algorithms are evaluated and compared on this task. ([[User:Wenjun Wang|Wenjun Wang]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1035090&amp;amp;bnc=1 Sustainable software development]&lt;br /&gt;
: This brief paper describes the shortcomings of typical usability analysis when evaluated against a much different culture. It describes how the software development process, and specifically HCI research, has focused on values developed through Western societal norms, and it illustrates how these do not transfer to African nations, specifically Namibia. They ran a simple usability study and found that the results of the questionnaires contradicted observed evidence. Using these examples and ideas taken from this article, we could develop a plan to help improve the sustainability of this software in a more heterogeneous community. ([[User:Stephen Brawner|Stephen Brawner]], 26 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://eprints.ecs.soton.ac.uk/13925/1/ICVC_HRM_SN_AHM_final_paper1b.pdf From Open Source to long-term sustainability: Review of Business Models and Case studies]&lt;br /&gt;
: Provides several business models for sustainable open source development, with case studies for each. This could easily be relevant if the researchers are looking to create long-term tools that need to be supported long after the research funding ends.&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science/article/pii/S0048733302000033 ‘Libre’ software: turning fads into institutions?] What they term &#039;Libre&#039; software is open source software that is free to modify and re-distribute; the term is used here for its more relevant French definition. The paper discusses reasons why communities develop around this software and why members choose to contribute when there are few economic theories that could actually account for this. If certain key ideas are taken from this paper, we can use them to help counter reviewers&#039; perceived lack of a software sustainability plan. ([[User:Stephen Brawner|Stephen Brawner]], 26 September 2011)&lt;br /&gt;
&lt;br /&gt;
== Visual Analysis ==&lt;br /&gt;
* [http://www.limsi.fr/Individu/tarroux/enseignement/old/FraundorferBischof-wapcv2003.pdf Utilizing Saliency Operators for Image Matching] Fraundorfer-2003-USO&lt;br /&gt;
: This paper from the field of computer vision describes mathematical and geometric techniques for identifying regions of saliency and &amp;quot;sub-saliency&amp;quot; in images.&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/l81xhl05464u1473/ Contextual Priming for Object Detection] Torralba-2003-CPO&lt;br /&gt;
: This is another vision paper; it describes how we can use the context in which a region of an image appears to help with object detection in that region.&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.1888&amp;amp;rep=rep1&amp;amp;type=pdf Knowledge Visualization: Towards a New Discipline and Its Fields of Applications] Describes the process of visualizing knowledge as opposed to simply information. This paper proposes the idea and presents recent research on the subject. Several examples of different types of visualizations are presented. They begin to make the case for non-2D graphical visualization and interactive methods. ([[User:Stephen Brawner|Stephen Brawner]], 21 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.rutgers.edu/~asantell/thesis.pdf The Art of Seeing: Visual Perception in Design and Evaluation of Non-Photorealistic Rendering] A Ph.D. thesis describing methods for presenting non-photorealistic images concisely for the purpose of conveying relevant information. It corresponds nicely to our discussion about how to simplify information well. ([[User:Stephen Brawner|Stephen Brawner]], 21 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/l6dhy9wpvkdfv40l/ Action Reaction Learning: Automatic Visual Analysis and Synthesis of Interactive Behaviour ]([[User: Wenjun Wang|Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
*[http://books.nips.cc/papers/files/nips20/NIPS2007_1074.pdf Predicting human gaze using low-level saliency combined with face detection] NIPS-2007&lt;br /&gt;
: Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. This paper demonstrates that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on. ([[User: Wenjun Wang|Wenjun Wang]])&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_3.11&amp;diff=5269</id>
		<title>CS295J/Literature class 3.11</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_3.11&amp;diff=5269"/>
		<updated>2011-09-22T16:40:07Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [http://delivery.acm.org/10.1145/1980000/1978995/p363-pan.pdf?ip=138.16.109.17&amp;amp;CFID=42891662&amp;amp;CFTOKEN=62664825&amp;amp;__acm__=1316465415_bf314f267e966fb3000292152378c16a Now Where Was I? Psychologically-Triggered Bookmarking], Pan et al., CHI &#039;11&lt;br /&gt;
: Presents an interaction paradigm for implicitly bookmarking application progress/media during user interruptions (e.g., phone ringing) using galvanic skin response (GSR) to identify interruptions, or the orienting response (OR), automatically.  The authors evaluate how well GSR works for identifying user ORs, and describe a few experiments using an audiobook listener application that creates bookmarks (stored and represented in a GUI) automatically when GSR peaks in response to controlled stimuli.  In our project, we&#039;re looking at predicting user states (or effect on task performance), so this is an example of that kind of predictive affect-tracking that has been engineered into a usability feature.  (Owner: [[User:Steven Gomez|Steven Gomez]], Discussant: Wenjun Wang, Discussant: Nathan)&lt;br /&gt;
&lt;br /&gt;
* [http://ccom.unh.edu/vislab/PDFs/PineoWareTAP.pdf Data Visualization Optimization via Computational Modeling of Perception] Ware-2011-TVCG &lt;br /&gt;
: In this paper, the authors apply perceptual theory to visualization through computational modeling. A computational model is implemented to generate 2D flow visualizations, and the visualization is iteratively optimized. The model is an artificial neural network: its first layer corresponds to the visualization image, the second layer to the human retina, and the third to the V1 region of the human visual system. The optimization criterion is that &amp;quot;the user&#039;s perception of the visual variables should closely match the data being expressed&amp;quot;. The paper shows that this kind of computational model has three advantages: 1. it directly links improvement of visualization quality to advances in the understanding of the human visual system; 2. it enables discovery of good visualizations that the designer might not come up with otherwise; 3. the evaluation of the generated visualization can be readily adapted from the equations used in the optimization process, and can be automated. I think this is a unique and quantitative example of how cognition can be applied to visualization. (Owner: [[User:Hua Guo|Hua]], Discussant: Diem Tran, Discussant: [[User:Jenna Zeigen|Jenna Zeigen]] Sep.19, 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=1979013 Synchronous interaction among hundreds: an evaluation of a conference in an avatar-based virtual environment] CHI-2011&lt;br /&gt;
:This paper presents the first in-depth evaluation of a large multi-format virtual conference. The conference took place in an avatar-based 3D virtual world with spatialized audio, and had keynote, poster and social sessions. (Owner:[[User: Wenjun Wang|Wenjun Wang]],Discussant:Chen, Discussant:?)&lt;br /&gt;
&lt;br /&gt;
* [http://sfu.academia.edu/ChrisShaw/Papers/138757/BrainFrame_A_Knowledge_Visualization_System_for_the_Neurosciences Brainframe: A Knowledge Visualization System for the Neurosciences] Shaw-2009-KVS&lt;br /&gt;
: This paper begins with a brief overview of a problem plaguing the field of neuroscience today – namely, that there is so much data available that it can&#039;t be synthesized in a useful way by researchers – and the negative effects that arise as a result. The authors propose BrainFrame, a &amp;quot;knowledge management system,&amp;quot; designed to streamline this massive amount of data into a formal set of concepts that can be organized quantitatively. Although BrainFrame is not explicitly a visualization system, this paper is highly relevant to our course on several levels. It shares a common focus of subject matter in neuroscience data management, it emphasizes ease of use based on human cognitive limitations, and it provides several guidelines for the construction of other similar knowledge management systems, based on existing examples. (Owner: Michael Spector, Discussant: Diem Tran, Discussant: Clara Kliman-Silver)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=989863.989880&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=43496808&amp;amp;CFTOKEN=17350291 The Challenge of Information Visualization Evaluation] Plaisant-2004-CVE &lt;br /&gt;
: The paper describes current evaluation practices being used in visualization research and challenges researchers are facing: &lt;br /&gt;
# Most evaluations are done in lab settings, which may be different from real environments. &lt;br /&gt;
# Evaluations usually include only simple tasks, while users in real settings may perform tasks of varying complexity.&lt;br /&gt;
# Users may need to look at the data they are examining from different perspectives, which makes it difficult to evaluate the success of the tools. &lt;br /&gt;
# Current evaluations cannot quantify the risks related to errors/failure of the tools, or the benefits and chances to explore new phenomena in the data.&lt;br /&gt;
: The paper then describes a contest run to examine the use of benchmark datasets and tasks; the outcome shows that it is very difficult to fully evaluate an analysis tool given the benchmark, due to the complexity of the problem.&lt;br /&gt;
: The paper ends with some case studies of successful analysis tools.&lt;br /&gt;
:(Owner: Diem Tran, Discussant: [[User:Steven Gomez | Steven Gomez]], Discussant: Chen)&lt;br /&gt;
&lt;br /&gt;
* [https://docs.google.com/viewer?url=http%3A%2F%2Fwww.mifav.uniroma2.it%2Fiede_mk%2Fevents%2Fidea2010%2Fdoc%2FIxDEA_6_12.pdf Design and Evaluation of a Mobile Art Guide on iPod Touch]&lt;br /&gt;
: This paper evaluates the design principles behind an iPod app with respect to minimizing cognitive load and maximizing usability, without sacrificing the artistic and historical information the app is supposed to provide. With these constraints in mind, the authors identify navigation techniques, graphics, and balance of information and technical architecture as key aspects of their design. Trade-offs between HCI technology and cognitive load are discussed, particularly when striking a balance between graphics and text. (Owner: [[User: Clara Kliman-Silver|Clara Kliman-Silver]], Discussant: [[User:Jenna Zeigen|Jenna Zeigen]], Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://adrenaline.ucsd.edu/kirsh/articles/cogscijournal/DistinguishingEpi_prag.pdf On Distinguishing Epistemic from Pragmatic Action] Kirsh and Maglio&lt;br /&gt;
: This is an interesting paper for the theory of interaction. A common assumption in visualization user models is that the purpose of interaction is always to affect the state of a program. However, the authors make the argument that some interactions make more sense as techniques to make reasoning more efficient. They study players in a game of Tetris and find a high number of interactions that are not goal-directed, but which might orient the game pieces to help the user envision where they fit. Since visualization is all about understanding a data space, and goals are often open-ended, this kind of &amp;quot;epistemic action&amp;quot; may be an important part of our user models. (Owner: Caroline Ziemkiewicz, Discussant: [[User:Hua Guo|Hua]], Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.180 Attention, Habituation and Conditioning: Toward a Computational Model] Balkenius-2000-AHC&lt;br /&gt;
: Summary&lt;br /&gt;
:* &amp;quot;The central claim of this article is that attention can be controlled in the same way as actions using similar learning mechanisms and by related areas of the brain.&amp;quot; &lt;br /&gt;
:* &amp;quot;A computational model of attention is presented that uses habituation as well as classical and instrumental conditioning to explain a number of attentional processes.&amp;quot;&lt;br /&gt;
:* &amp;quot;Computer simulations are presented that illustrate the operation of the model.&amp;quot;&lt;br /&gt;
: Relevance&lt;br /&gt;
:* Model of attention may have some predictive capabilities (i.e., might be able to use it to predict people&#039;s attention)&lt;br /&gt;
:* Can use conditioning methods to train users to focus attention on important parts of the visualization (assuming we know what these important parts are)&lt;br /&gt;
: (Owner: [[User:Nathan Malkin|Nathan Malkin]], Discussant: [[User:Hua Guo|Hua]], Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.psy.utexas.edu/homepage/group/langloislab/PDFs/Bronstad.p.2008.pdf Computational Model of Facial Attractiveness Judgements] Bronstad-2008-CMF&lt;br /&gt;
: Talks about computational models of facial attractiveness judgements: one model uses partial least squares to relate facial images to attractiveness ratings; the second model uses manually derived measures of facial features as input. Results are discussed. The paper also concludes that averageness and sexual dimorphism are important for facial attractiveness judgements. (Owner: Chen, Discussant: Clara, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382909 Interactive Visualization of Small World Graphs] van Ham-2004-IVS&lt;br /&gt;
:The brain can be considered to be a small world type network, and such networks are considered difficult to visualize effectively because of their high connectivity. This paper discusses specific ways the authors have found to make such visualizations detailed enough to convey sufficient information but abstracted and simple enough that they are easy to interact with. Specifically, they suggest the use of fish-eye distortion, different levels of focus based on distance from the section being examined, clusters of spherical nodes, and edge drawing conditional on length. (Owner: [[User:Jenna Zeigen|Jenna Zeigen]]; Discussant: [[User:Steven Gomez | Steven Gomez]]; Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.1888&amp;amp;rep=rep1&amp;amp;type=pdf Knowledge Visualization: Towards a New Discipline and Its Fields of Applications] Describes the process of visualizing knowledge as opposed to simply information. This paper proposes the idea and presents recent research on the subject. Several examples of different types of visualizations are presented. They begin to make the case for non-2D graphical visualization and interactive methods. (Owner: [[User:Stephen Brawner|Stephen Brawner]], 21 September 2011, Discussant:? Discussant?)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Review_Response&amp;diff=5268</id>
		<title>CS295J/Review Response</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Review_Response&amp;diff=5268"/>
		<updated>2011-09-22T16:36:25Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Full Proposal (SI2-SSI) Review and Response ==&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Proposal Information&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number: 1047832&lt;br /&gt;
:&#039;&#039;Proposal Title: SI2-SSI: Collaborative Research: Cognition-aware Visual Analytics of Brain Circuits&lt;br /&gt;
:&#039;&#039;Received by NSF: 06/14/10&lt;br /&gt;
:&#039;&#039;Principal Investigator: David Laidlaw&lt;br /&gt;
:&#039;&#039;Co-PI(s): David Badre, Steven Sloman&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program Information&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Division: Office of CyberInfrastructure&lt;br /&gt;
:&#039;&#039;NSF Program: Software Institutes&lt;br /&gt;
:&#039;&#039;Program Officer: Manish Parashar&lt;br /&gt;
:&#039;&#039;PO Telephone: (703) 292-8970&lt;br /&gt;
:&#039;&#039;PO Email: mparasha@nsf.gov&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Summary #1&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number: 1047832&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Summary: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;INTELLECTUAL MERIT (INCLUDING POTENTIAL TRANSFORMATIVE ASPECTS): &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;POSITIVE ASPECTS OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel noted that the proposal was clear and comprehensive, and addressed the criteria for SI2 proposals as well as the general NSF criteria. The proposed software would enhance both visualization of data on brain function and the knowledge discovery process of researchers in this area. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SHORTCOMINGS AND WEAKNESSES OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel&#039;s discussion focused on the sustainability issue beyond the end of the project support timeframe. While a section of the proposal talks about the Outreach, Education, and Sustainability Plan, the community outreach and sustainability aspects are treated somewhat cursorily. These two issues are closely related: without community support, the software is unlikely to be sustainable in the long run. On the other hand, the proposers appear to be well known in their field, which may enhance community uptake. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal was perhaps over-ambitious; aspects of the evaluation (for example, eye tracking) will create large amounts of data that will require correspondingly intensive data analysis. However, the panel felt that even if the project did not accomplish every detail of the proposal, it would still be highly worthwhile. Similarly, while some aspects of the project might be seen as risky, a certain amount of risk is acceptable in NSF proposals, or even expected. Moreover, any risk is mitigated by the qualifications of the PIs, as exemplified by their excellent track record. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;BROADER IMPACTS: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;POSITIVE ASPECTS OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel believes that the framework of this project would be portable to such other fields as gene regulation and the analysis of other complex networks. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SHORTCOMINGS AND WEAKNESSES OF THE PROPOSAL AND PROPOSED RESEARCH: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;ADDITIONAL REVIEW CRITERIA: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed software would primarily be of use to brain researchers. Other fields would be impacted indirectly, in the sense that if this way of building software packages combining visualization with support for hypothesis testing and tracking of analyses succeeds, it might provide a pattern for those fields to follow in their own software development processes. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposers have laid out a detailed five-year plan. One panelist questioned whether sufficient attention had been paid to issues of sustainability; in particular, no mention was made of plans for software support beyond the end of the five-year plan. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SYNTHESIS COMMENTS: &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel agreed that this was a highly competitive proposal. In particular, the proposal attacks a large but tractable problem, and the team&#039;s wide and deep expertise gives the project a high probability of success. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;========================== &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PANEL RECOMMENDATION (CHECK ONE): &lt;br /&gt;
:&#039;&#039;[X] Competitive (C) &lt;br /&gt;
:&#039;&#039;[ ] Not Competitive (NC) &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This panel summary was read by panelists who participated in the discussion of this proposal, and they concurred that the summary accurately reflects the panel discussion.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Recommendation: Competitive&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #1&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1047832&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Software Institutes&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SI2-SSI: Collaborative Research: Cognition-aware Visual Analytics of Brain Circuits&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SSI proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;A five year project to develop software for visualization and analysis of brain circuitry by working with brain researchers to analyze their cognitive processes that need to be supported. The software is to help link the visualization workflow to a &amp;quot;decisional&amp;quot; workflow, supporting &amp;quot;reasoning and analysis at a high level, rather than just displaying data.&amp;quot; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The development phase will include studies of user interaction at both low level (eye tracking, mouse click logs) and high level (decision making). &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposers see this as sort of a prototype of the way software could be developed to support other scientific endeavors; this effect appears to constitute most of the Broader Impact (I wouldn&#039;t count benefit to &amp;quot;the entire brain science research community&amp;quot;, mentioned in the BI statement, as broader impact). &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;While this work is mostly outside my domain of expertise, it looks like a well-chosen problem and an interdisciplinary effort. In particular, the team appears to have the broad range of people they would need to pull this off, from an expert from the target audience (who suggested the project and is listed as a co-PI), to cognitive scientists and computer scientists. In addition to the Intellectual Merit and Broader Impacts criteria, the specific &amp;quot;additional criteria&amp;quot; listed in the Program Solicitation have been explicitly and (to the extent that I can tell) well addressed. The required supplemental documents address the required points as well. One quibble I have with the Management and Coordination Plan is the statement that &amp;quot;The Stanford researchers will visit Brown if face-to-face interactions become necessary.&amp;quot; While electronic communication allows collaboration in ways that would not have been possible before, I think it&#039;s not a question of whether face-to-face will be necessary, but how often. Fortunately, this appears to have been built into the travel budget.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #2&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1047832&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Software Institutes&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SI2-SSI: Collaborative Research: Cognition-aware Visual Analytics of Brain Circuits&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The PIs of this project propose to develop, test, and deploy software tools for the scientific study of brain circuits. The project will focus on building a cyberinfrastructure software system intended to improve the speed at which brain researchers can complete their data analysis, and it will advance the understanding of human cognition. This project has the potential to change the way researchers in the field collect and analyze data by providing a rich set of cyberinfrastructure tools for use in studying and modeling brain circuits. The intellectual merit of the project is very high, as it could greatly reduce the time required to collect and analyze data. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This project will provide an enhanced set of software tools to researchers in several areas conducting research related to the human brain. Fields in which the cyberinfrastructure software could be used include gene regulation, protein signaling, and even crime and terrorism analysis, all of which have the potential to benefit. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This project focuses on an area of research spanning several disciplines that will be able to utilize the cyberinfrastructure software that will be developed. The PIs have a proven record of accomplishment in prior research projects. The project has intellectual merit and will have a broad impact by providing an enhanced set of software tools that will facilitate the efforts of researchers studying and modeling the brain.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #3&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1047832&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Software Institutes&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SI2-SSI: Collaborative Research: Cognition-aware Visual Analytics of Brain Circuits&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Very Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity will allow brain scientists to visualize brain functions more easily and in much more detail than in the past. Network models, coupled with sophisticated methods for dimensionality reduction, promise to offer unique insights into the workings of the human brain. Moreover, the proposed visualization tool will go through rigorous evaluation that will allow its constant improvement. The proposers form a very strong group of well-established researchers in brain science and data visualization, offering a unique collaboration. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Researchers from other disciplines, e.g., those who study gene regulation or protein signaling, or who perform crime and terrorism analysis, could benefit from the proposed software. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This unique collaboration between brain scientists and data visualization scientists promises to offer tremendous benefits to the scientific community. The software that will be developed will allow researchers to understand the signal pathways in the human brain in more detail than ever before. The software will be analyzed through a rigorous process, using models of cognition.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #4&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1047832&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Software Institutes&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;SI2-SSI: Collaborative Research: Cognition-aware Visual Analytics of Brain Circuits&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The goal of this proposal is to develop, test, and deploy interactive visualization tools for the scientific study of brain circuits. The tools will help brain researchers view brain circuits at multiple scales and perform sophisticated analysis of research hypotheses. The team members have a decade of experience developing scientific visualization tools for scientific users and include experts in cognitive science, neuroscience, computer science, and visual design. They are well qualified to conduct the project. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is an interdisciplinary project and the target user community is brain scientists. The tools will be made available to the public and are expected to benefit the entire brain science research community as well as other disciplines studying linked types of data. The tools can also be used in classes to help students understand connectivity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;As the proposal is for SI2-SSI, I would like to know more on what the team plans to do to ensure the sustainability of the software and develop open-source community support. It is also unclear what the team will do to integrate diversity into the proposed activity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;I can see that the proposed work will be valuable for brain scientists studying the connectivity and dynamics of neural circuits in the intact brain, as existing systems all have limitations and cannot satisfy the needs of brain scientists, as discussed in the proposal. Other scientific domains may need similar tools. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal focuses on an interesting problem for which it also provides a novel solution. Therefore, I think this is a quality proposal and worthy of support.&lt;br /&gt;
&lt;br /&gt;
== Pre-proposal (Expeditions) Reviews and Response ==&lt;br /&gt;
:&#039;&#039;Proposal Information&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Received by NSF:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;09/10/10&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;David Laidlaw&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Co-PI(s):&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;David Badre&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Steven Sloman&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This Proposal has been Electronically Signed by the Authorized Organizational Representative (AOR).&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program Information&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Division:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Division of Computer and Communication Foundations&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Program Officer:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Mitra Basu&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PO Telephone:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;(703) 292-8649&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PO Email:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;mbasu@nsf.gov&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Summary #1&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number: 1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Summary: &lt;br /&gt;
:&#039;&#039;Panel Summary &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Summary for Expeditions Preliminary Proposals &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Preliminary Proposal Summary (Vision/Goals of the Expedition) &lt;br /&gt;
:&#039;&#039;This proposal is focused on blending the areas of cognitive science, neuroscience, and HCI to develop new tools that would help in understanding the interrelationships in complex interconnected data sets. The work would focus on brain science activities but would likely be applicable to many other areas that have complex interconnected data such as crime and terrorism analysis. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The strength of this proposal is in the benefits it would bring to the intersections of cognitive science, neuroscience, and HCI. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Intellectual Merit &lt;br /&gt;
:&#039;&#039;The authors of this proposal are planning to build cognitive models of brain scientists&#039; perception and reasoning in performing their research. They then intend to use these models to develop new and improved interaction and visualization techniques for tracing neural pathways, with the expectation that use of the cognitive models will reduce the trial and error required to produce effective tools. Additionally, the cognitive models may even result in the invention of new visualizations through a more systematic exploration of the design space. One novel aspect of this proposal is the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception. However, the proposal does not elaborate on how the PIs would use this heuristic knowledge; it is only mentioned in the introduction and never developed in the proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Overall the panel had difficulty coming to a common understanding of the proposal contents. The range of reviews mirrored the range of what people had read into the proposal and their enthusiasm for the proposal topic. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;That said, the proposal failed to convince the reviewers along a number of dimensions. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;First, the proposal fails to articulate a clear research plan and a clear set of outcomes. As an example, the proposal mentions a number of cognitive concepts that will be incorporated in the interface, such as causal reasoning and dual systems theory. Utilizing cognitive principles to inform this research is applauded, but we expected to see some indication of how they would be put into practice. The proposal is even less clear when it comes to goal maintenance.&lt;br /&gt;
:: &#039;&#039;&#039;We have included the description of a proposed module aimed at facilitating &#039;goal maintenance&#039; with analysts.  It will use observations about an individual&#039;s interactions, including logging and explicit communications (e.g., what the user declares himself/herself to be doing), to predict user goals and propose and/or prime facilitating sub-goals.  These sub-goals may be learned by analyzing multi-user activity with the application.  Furthermore, the module will be designed to adapt to individual users&#039; workflows, based on predictive models we will develop to characterize the multitasking abilities of individual users using eye-tracking and cognitive/memory puzzles.&#039;&#039;&#039; ([[User:Steven Gomez|Steven Gomez]] 19:06, 21 September 2011 (EDT))&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;A more ambitious undertaking is that of predicting user performance. This aspect of the work is motivated by previous studies showing that student performance on algebra problems can be predicted based on eye movements. This is a very interesting result, but it is not clear how it would apply to an entirely different domain that requires a different and more taxing set of cognitive skills. The proposal does not describe how user behavior will be measured (other than through eye trackers) or even how performance is going to be measured. Algebra problems are generally closed ended, with a well-defined solution, whereas exploratory data analysis of neuroinformatics data is an open-ended problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal needs to be much more explicit about the techniques used to derive predictions including the track records of these techniques and the ways in which these techniques may need to be enhanced to be used in particular application domains. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Another concern is methodological. We suggest that the team identify benchmark tasks that would be representative of the cognitive skills that the interface attempts to capture, and would also vary in their degree of complexity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Broader Impacts &lt;br /&gt;
:&#039;&#039;The proposed work would provide research opportunities for faculty, postdocs and students working in the project. A tool with the capabilities described in the proposal would mostly benefit the brain science research community. The tool will be used in two computer science courses at Brown. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The panel thought that the proposal team could do more work in considering possibilities for broader societal impact and for designing more impactful outreach and education activities that would extend beyond the Brown community. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Rationale for the Recommendation &lt;br /&gt;
:&#039;&#039;Despite enthusiasm for this topic and the potential for significant impact if successful, the panel could not support the pre-proposal at this time due to a lack of coherent vision and a realizable plan of implementation. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a very ambitious proposal that seeks to develop a new generation of visualization tools for the analysis of neuroinformatics data. While this is the kind of big-picture, high-risk project that the Expeditions in Computing program is designed to support, the proposal itself fails to provide a plan for achieving its high-level, abstract objectives. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Put an X next to the appropriate category &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Invite &lt;br /&gt;
:&#039;&#039;Invite-if-possible &lt;br /&gt;
:&#039;&#039;Do-Not-Invite	 X &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This summary was read by/to the panel, and the panel concurred that the summary accurately reflects the panel discussion.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Panel Recommendation: Do Not Invite&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #1&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Very Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal is focused on blending the areas of cognitive science, neuroscience, and HCI to develop new tools that would help in understanding the interrelationships in complex interconnected data sets. One novel aspect of this proposal is the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception. The work would focus on brain science activities but would likely be applicable to many other areas that have complex interconnected data, such as crime and terrorism analysis. The strength of this proposal is in the benefits it would bring to the intersections of cognitive science, neuroscience, and HCI. &lt;br /&gt;
:&#039;&#039;The leadership team of distinguished faculty and researchers is well qualified to conduct this research. The entire team is an appropriate blend of researchers representing all of the subordinate areas of research. &lt;br /&gt;
:&#039;&#039;The quality of prior work from all participating members is uniformly outstanding and appropriate for this endeavor. &lt;br /&gt;
:&#039;&#039;While much work of a similar nature has been done in the past, this project would move our knowledge forward on a number of new fronts. In addition, the inclusion of heuristic knowledge of artists and visual designers related to cognition and perception is novel. &lt;br /&gt;
:&#039;&#039;The proposal is well conceived and organized and clearly presented. &lt;br /&gt;
:&#039;&#039;With the pupil tracking device requested in the budget, there appears to be sufficient access to all the required resources necessary for this undertaking. &lt;br /&gt;
:&#039;&#039;There seems to be a wealth of available experimental facilities at the associated institutions. Some of the relevant equipment includes tiled display walls, stereo-enabled desktop displays, an ultra-high-resolution Wheatstone stereoscope, haptic devices, and a virtual-reality cave to come online later this year. &lt;br /&gt;
:&#039;&#039;The leadership plan appears to be appropriate for the small size of the project personnel. The team members have a good history of interaction on related academic activities. &lt;br /&gt;
:&#039;&#039;Other than nominal support for traditional computing needs from the computer science department at Brown, there did not appear to be any specific institutional support for this proposal. &lt;br /&gt;
:&#039;&#039;The budget is well thought out, clearly described, and justified. &lt;br /&gt;
:&#039;&#039;The collaboration amongst the faculty of Brown, Stanford, and the Rhode Island School of Design appears to be very appropriate for this project. The project would bring together cognitive scientists, visualization experts, and other domain specialists to bridge the gap between theory and practice in this area of brain research. Clearly the synergy in this group would help to ensure the success of this work. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed work appears to be very important in this specific area of research. While this is not my specific area of expertise, I have some concern about whether this project meets the overarching goals of the Expeditions in Computing Initiative. In particular, it would seem to have the potential to stimulate interest in this area for graduate students in related disciplines, but may not be very successful in drawing attention to STEM studies amongst the K-12 age group. &lt;br /&gt;
:&#039;&#039;A major focus of this proposal is the conceptualization, design, and development of a software framework for predicting user performance. It would gather information on specific models, user interfaces, and user goals and endeavor to produce probabilistic estimates of the state of users over time as predicted by the models. &lt;br /&gt;
:&#039;&#039;I did not see anything in the proposal that indicated that it would be of particular interest to youth and underrepresented groups. &lt;br /&gt;
:&#039;&#039;The single sentence on stimulating effective knowledge transfer did not seem convincing. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a good general research proposal in the area of brain research and the understanding of interconnected relationships of complex data sets. There are some novel research concepts, including the heuristic knowledge of artists and visual designers related to cognition and perception. The leadership team is well qualified to lead this endeavor, and the outlook for good research results looks promising. The proposed budget is in line with the proposed activities and personnel commitments. This proposal should fare well as a general unsolicited proposal for NSF. I would rank this proposal to be better than many of the other proposals of the Expeditions in Computing Initiative.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #2&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Good&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Intellectual Merit: The main goal is to push the frontiers of data visualization, with secondary thrusts on improving our understanding of the cognition of how we look at data. The strengths of the proposal are that the proposed tool is, as far as I know, ground-breaking in that it will actively change based on the user. Further, the researchers are well qualified, with expertise in psychology as well as in computer science. The major weakness was that I was not really clear how the software tool would eventually work. For instance, they talked a little about following the pupils of the user - but I was not clear how they would capitalize on that knowledge. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Value added: &lt;br /&gt;
:&#039;&#039;I think the proposed research could be complex and important enough to warrant this investment - however, I could not find a clearly defined path of attack in the proposal. It fits well within the 3 program goals, with probably the greatest emphasis on the first. Intelligent data visualization tools that can react to the user will open many new doors in understanding science. It will impact and inspire future computer scientists, although I do not see a preference for underrepresented groups. Finally, it has the potential to stimulate new significant findings in science and in education. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Leadership plan: &lt;br /&gt;
:&#039;&#039;The leadership plan seemed well thought out. They have a diverse set of researchers, each with their unique skill set. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Broader Impact: The work will be distributed to all who want to use it and will be used in classes at Brown, affecting students at all levels (either as developers or clients). I found their vision here a little short-sighted. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;In their proposal entitled &#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools,&#039; the authors propose to use insights from cognitive science, neuroscience, and human-computer interaction to develop better tools for examining data. In particular, they will develop software that will visualize neural connections in the brain. At the same time, they will actively measure the client and use these data to predict what the client will want to see next. They present a compelling case that we need better visualization tools for understanding the brain.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #3&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Excellent&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The dominant approach to developing interactive systems is for the developers to interact with the envisioned users to gather the general requirements for the application and to construct software based on the developer&#039;s intuitions as to how the users will actually interact with the software. Depending on the sophistication of the organization, cycles of usability testing and re-design are used to refine the interface; alternately, they may simply release the software to the users and wait for the complaints or lack of sales. &lt;br /&gt;
:&#039;&#039;An alternative to this expensive process is to construct explicit models of the perceptual and decision making processes of the users and then use these models to inform the design process. Work on cognitive models such as GOMS and ACT began about three decades ago and has progressed slowly but steadily throughout the time period and there have been a number of small demonstrations that such models can, in fact, eliminate most or all of the iteration previously required. &lt;br /&gt;
:&#039;&#039;The authors of this proposal are planning to build cognitive models of brain scientists&#039; perception and reasoning in performing their research. They then intend to use these models to develop new and improved interaction and visualization techniques for tracing neural pathways, with the expectation that use of the cognitive models will reduce the trial and error required to produce effective tools. Additionally, the cognitive models may even result in the invention of new visualizations through a more systematic exploration of the design space. &lt;br /&gt;
:&#039;&#039;Although the focus of the recent work in cognitive models has been to develop engineering models which are capable of being used outside of the research setting, use of such models in the design of interactive systems has been slow to catch on. If nothing else, construction of the models requires a large amount of intellectual labor and, to date, impressive examples of the use of these models to justify that labor investment have been rare. This work has the potential for providing such a critical example and could be the impetus to finally move cognitive models into widespread use. &lt;br /&gt;
:&#039;&#039;Additionally, work in as complex an area as brain science will ensure that the cognitive modeling tools can handle nearly any application. &lt;br /&gt;
:&#039;&#039;Finally, if the models do result in improved tools, the research may result in new findings in the brain science field. &lt;br /&gt;
:&#039;&#039;The primary risk in this proposal is that the cognitive models which can be created are too weak to support the design process. There is no guarantee that the computer scientists and psychologists doing this research will be able to understand and model the cognitive processes of a brain scientist. &lt;br /&gt;
:&#039;&#039;Value-added of funding the activity as an Expedition &lt;br /&gt;
:&#039;&#039;This work requires substantial commitment on the part of the computer scientists and psychologists to learn the brain science domain and on the part of the brain scientists for their interaction with the cognitive scientists. Such a commitment is unlikely to be obtained with smaller, more fragmented funding. Industry and venture capital are unlikely to fund this kind of research. &lt;br /&gt;
:&#039;&#039;The main knowledge transfer methods will be the mentoring of graduate students and the addition of formal courses intended to teach about interdisciplinary collaboration. In addition, the software they develop will be made available for distribution. Except through the rather limited vehicle of scholarly publication, it is not clear how the cognitive models themselves are to be made available. The authors may want to consider using their own visualization capabilities to explain the models. &lt;br /&gt;
:&#039;&#039;Leadership and Collaboration Plan &lt;br /&gt;
:&#039;&#039;Only two institutions are involved in this work and the senior researchers are all located at one of the two institutions. This should minimize coordination problems. &lt;br /&gt;
:&#039;&#039;Funding for the primary brain scientist in this research is at the 50% level and his supervisor has a nominal level of funding. This is a cause for concern, given the level of commitment required to support what is, essentially, someone else&#039;s area of research. A higher level of funding would be desirable, even if a substantial amount of the funded time is spent on pure brain science research. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal includes training a number of graduate and postdoctoral students; in fact, most of the funding requested is for student support. &lt;br /&gt;
:&#039;&#039;As mentioned earlier, to the extent this work results in wider acceptance and usage of cognitive models, particularly in the development of scientific software, it will accelerate the construction of interactive systems which can be used efficiently. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This work should lead to significantly wider use of cognitive modeling in interactive systems design, as well as provide researchers in brain science with superior tools.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #4&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Fair&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Summary &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This proposal seeks to develop new visualization techniques that will assist brain scientists with the interpretation of high-dimensional data. For this purpose, the PI will incorporate design principles and knowledge from cognitive science, neuroscience, and human-computer interaction. The visualization system will also capture data from scientists as they use the tool, and compare it with computational models from cognition, perception, and art. The tool will also be able to predict user performance and user state over time. The tool will be released through an open-source license, and will be incorporated into two courses. The team has worked together for a number of years. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Criterion 1. What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Strengths &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The kind of tool envisioned in this proposal would be invaluable, not only in neuroinformatics but also in other disciplines that deal with high-dimensional, multi-scale data, from social networks to geospatial information. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Weaknesses &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Despite its laudable objective, this work is not ready for further scrutiny as a full proposal. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;First, the proposal fails to articulate a clear research plan and a clear set of outcomes. As an example, the proposal mentions a number of cognitive concepts that will be incorporated in the interface, such as causal reasoning and dual systems theory. How are these principles going to be used to design a better visualization, and how are they going to be tested? I very much like the idea of using cognitive principles, but would have expected to see some indication of how they would be put into practice. The proposal is even less clear when it comes to goal maintenance: &amp;quot;We will use these principles to determine which tasks to make easily accessible to users and which to put in the background.&amp;quot; This is a general problem for interface design, not a solution to the problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;A more ambitious undertaking is that of predicting user performance. This aspect of the work is motivated by previous studies showing that student performance on algebra problems can be predicted based on eye movements. This is a very interesting result, but it is not clear how it would apply to an entirely different domain that requires a different and more taxing set of cognitive skills. This is a wild extrapolation. The proposal does not describe how user behavior will be measured (other than through eye trackers) or even how performance is going to be measured. Algebra problems are generally closed-ended, with a well-defined solution, whereas exploratory data analysis of neuroinformatics data is an open-ended problem. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Another concern is methodological. Say that the proposal had articulated a clear plan and a reasonable set of deliverables for a new generation of visualization interfaces. Wouldn&#039;t it be better to test this interface on some benchmark problems, and see how it facilitates performance relative to a standard interface? These benchmark problems would be representative of the cognitive skills that the interface attempts to capture, and would also vary in their degree of complexity. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;To what extent does the proposed activity suggest and explore creative, original, or potentially transformative concepts? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal is very ambitious in its overall objectives. A visualization tool having the characteristics suggested in the proposal would be invaluable to brain science as well as to other scientific disciplines dealing with high-dimensional complex data, such as genomics/proteomics, geospatial analysis, network analysis, etc. However, the proposal fails to turn a high-level concept into a realizable implementation. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Is the work of sufficient import, scale, and/or complexity to justify this type of investment? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The brain is one of the scientific frontiers for the 21st century. The proposal has the complexity and scale worthy of this type of investment, but fails to deliver a realistic plan (if any plan at all) or even specifications. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Will the work contribute to realization of the EIC program goals and is it likely to demonstrate completion of these goals? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Understanding the brain is one of our greatest scientific challenges. Unfortunately, without a clear research plan it is difficult to assess the likelihood that the proposal will be able to demonstrate completion of its overall goals. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Value of the experimental systems or shared experimental facilities proposed &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The investigators will utilize some shared facilities in their research and will share software and data that they produce to allow further research by others. The proposed software testbed will be used across the collaborators to test models of cognition and perception in the context of HCI. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Leadership and Collaboration Plan &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The investigators have worked together for a number of years, have taught classes together, and their students have attended classes from each other. No leadership or collaboration plan is discussed beyond this. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Criterion 2. What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Strengths &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed work would provide research opportunities for faculty, postdocs and students working in the project. A tool with the capabilities described in the proposal would mostly benefit the brain science research community. The tool will be used in two computer science courses at Brown. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Weaknesses &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Societal benefits of this tool would derive from its scientific merit to the extent that it would help understand the brain. Given the characteristics of this project, I wonder if other NSF funding opportunities would be more suitable, such as the FODAVA program or the interdisciplinary program in neuroscience at CISE. I also wonder whether this work should be funded instead by NIH (NIBIB, NIMH). The budget contains a request for $3,000 to cover costs of animal (mouse) care; why is this needed given that the proposal is for software development? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;This is a very ambitious proposal that seeks to develop a new generation of visualization tools for the analysis of neuroinformatics data. The tools would allow brain scientists to explore high-dimensional data, and the tool would also predict user performance and state. The proposal is inspired by principles from cognitive science, neuroscience and HCI. While this is the kind of big-picture, high-risk project that the Expeditions in Computing program is designed to support, the proposal itself fails to provide a plan for achieving its high-level, abstract objectives. The proposal does not provide a leadership or collaboration plan.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #5&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Fair&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;PROPOSAL OBJECTIVES AND APPROACH &lt;br /&gt;
:&#039;&#039;The proposal develops a variety of tools for interactive analysis and reasoning for brain scientists. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;INTELLECTUAL MERIT &lt;br /&gt;
:&#039;&#039;The project addresses research in three areas: human-computer interaction, cognitive modeling, and the connectivity in the brain. It lists 11 items that are to be developed. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal is vague and highly repetitive. It is not well written. It is not clear what research experiments will be performed. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The team is fine. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;There is a great need for the tools by the computational neuroscience and cognitive science community. This project will develop some of these tools. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The details of the educational plan are not given. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposal can be strengthened by focusing and making the challenges and ideas more clear.&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Review #6&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Number:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;1064261&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;NSF Program:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Experimental Expeditions&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Principal Investigator:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Laidlaw, David H&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Proposal Title:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Collaborative Research: Cognitive Optimization of Brain-Science Visual-Analysis Tools&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Rating:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039; &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Poor&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;REVIEW:&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What is the intellectual merit of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity of developing and improving neuroscience visual analysis tools is very important. Currently, access to the genre of systems described in this proposal, especially in areas with complex sub-structure such as neuroscience, is lacking, and the proposed activity could have a profound effect on the state of the art in the field. The potential interplay between the user-interface experts and biomedical informatics developers is a possible strength. The available team and resources are very strong and capable, with an excellent track record in the field. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;However, the proposed activities within the proposal are completely underspecified. Given the scale of the project and the challenging nature of the problems to be faced, the level of technical detail and planning presented in the proposal is insufficient and unconvincing. A great many claims are made in the proposal with no clear measurable end-points to determine how success within the project could be evaluated. Specifically, the authors claim to employ developments in concepts of cognition and perception to assist scientists&#039; reasoning. What aspects of reasoning? For which tasks? Within which discipline? If this is only related to tractography, what sort of scientific hypotheses do the applicants expect to address? Are they relating these representations of neural connectivity to studies in animals? Are they relating these analyses to other modalities of imaging data? How do they intend to reason over the complex semantics of these other experimental types? These are all glaring omissions from the proposal. The description of the cognitive science aspects of the project was marginally better specified, but I still found the details lacking. For example, the claim was made that &#039;our system will tune itself to individual work styles.&#039; How? What technical elements will the system exploit to accomplish this? In particular, the applicants must spend more time specifying the precise tasks that the system is designed to tackle before it is possible to improve or optimize performance at that task. &lt;br /&gt;
&lt;br /&gt;
::Response: We have added a section called &amp;quot;Task Analysis,&amp;quot; in which we outline a plan and preliminary results for specifying visual analysis tasks through user observation. ([[User:Caroline Ziemkiewicz|Caroline Ziemkiewicz]] 10:48, 22 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;In particular, the statement &amp;quot;We expect the core element to evolve significantly through the five years of the project. It cannot be meaningfully defined without the data we will acquire from users, so details beyond this overview are not possible yet&amp;quot; is incredibly revealing and suggests that the applicants themselves do not have a clear idea of how they intend to solve these problems. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The Figures presented in the proposal looked confusing and uninformative, adding nothing to the argument that these systems would actually help a scientist understand the underlying data. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;I very much liked the proposed idea of using the system directly in courses taught at Brown and other collaborating institutions. I think that this is a sizable innovation that would be very welcome in the field and might even form the basis of evaluation metrics for the success of the system (which could address one of my previous criticisms of the project). &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;What are the broader impacts of the proposed activity? &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The activity is well integrated with training and learning. There are a number of students in the group, all of whom would have the opportunity to work with professionals from a very different focus. The interdisciplinary nature of the work coupled with the need for analysis tools in biology would be an excellent synergy to cultivate. The presence of high-end graphics equipment (such as a virtual-reality cave, haptic displays, etc.) is a plus for the project but could also hinder the developers&#039; ability to release their work to a broader audience. If the system is only available to the small number of people who have access to such facilities, then the impact of the work would be lessened. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The proposed activity makes no specific claims to target or support underrepresented groups explicitly. &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;The underspecification of the technical aspects of the project undermines the open-source distribution of the code. It is technically demanding to generate usable open-source products for other people to use. Notably, browsing the co-PIs&#039; webpages, there were no easily accessible open-source software products available. &lt;br /&gt;
&lt;br /&gt;
:#&#039;&#039;&#039; Since the review, the PI&#039;s lab has published a large selection of its source code on its website (vis.cs.brown.edu), with provisions for multiple platforms.&lt;br /&gt;
:#&#039;&#039;&#039; Demonstrating the lab&#039;s commitment to openness, we are undertaking an ambitious project to make brain imaging data freely available to the scientific community.&lt;br /&gt;
:#&#039;&#039;&#039; Development will be done using a publicly visible source code repository (e.g., Github), so that other scientists and the public may be able to track progress, comment on changes, provide feedback, and even contribute pieces of code to the software.&lt;br /&gt;
::&#039;&#039;&#039; ([[User:Nathan Malkin|Nathan]])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Summary Statement &lt;br /&gt;
:&#039;&#039;&lt;br /&gt;
:&#039;&#039;Although the high-level conceptualization of this project is exciting, the way that the project was described in the proposal was massively underspecified. Technical details were lacking, and some fundamental aspects of the project&#039;s conceptualization in terms of the scientific domain under study were missing. There was no timetable, and no evaluation proposed to see how progress would be measured. The authors should be careful about making high-level claims concerning the possible impact of the proposed work without a more carefully constructed argument to back up the claims.&lt;br /&gt;
:&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_3.11&amp;diff=5218</id>
		<title>CS295J/Literature class 3.11</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_3.11&amp;diff=5218"/>
		<updated>2011-09-20T16:32:54Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [http://delivery.acm.org/10.1145/1980000/1978995/p363-pan.pdf?ip=138.16.109.17&amp;amp;CFID=42891662&amp;amp;CFTOKEN=62664825&amp;amp;__acm__=1316465415_bf314f267e966fb3000292152378c16a Now Where Was I? Psychologically-Triggered Bookmarking], Pan et al., CHI &#039;11&lt;br /&gt;
: Presents an interaction paradigm for implicitly bookmarking application progress/media during user interruptions (e.g., phone ringing) using galvanic skin response (GSR) to identify interruptions, or the orienting response (OR), automatically.  The authors evaluate how well GSR works for identifying user ORs, and describe a few experiments using an audiobook listener application that creates bookmarks (stored and represented in a GUI) automatically when GSR peaks in response to controlled stimuli.  In our project, we&#039;re looking at predicting user states (or effect on task performance), so this is an example of that kind of predictive affect-tracking that has been engineered into a usability feature.  (Owner: [[User:Steven Gomez|Steven Gomez]], Discussant: Wenjun Wang, Discussant: Nathan)&lt;br /&gt;
&lt;br /&gt;
* [http://ccom.unh.edu/vislab/PDFs/PineoWareTAP.pdf Data Visualization Optimization via Computational Modeling of Perception] Ware-2011-TVCG &lt;br /&gt;
: This paper presents a computational model of human vision that can be used to optimize and evaluate visualization systems. I think this is a great example of the application of cognitive modeling to visualization. (Owner: [[User:Hua Guo|Hua]], Discussant: Diem Tran, Discussant: ? Sep.19, 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=1979013 Synchronous interaction among hundreds: an evaluation of a conference in an avatar-based virtual environment] CHI-2011&lt;br /&gt;
:This paper presents the first in-depth evaluation of a large multi-format virtual conference. The conference took place in an avatar-based 3D virtual world with spatialized audio, and had keynote, poster, and social sessions. (Owner: [[User:Wenjun Wang|Wenjun Wang]], Discussant: ?, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://sfu.academia.edu/ChrisShaw/Papers/138757/BrainFrame_A_Knowledge_Visualization_System_for_the_Neurosciences Brainframe: A Knowledge Visualization System for the Neurosciences] Shaw-2009-KVS&lt;br /&gt;
: This paper begins with a brief overview of a problem plaguing the field of neuroscience today-- namely, that there is so much data available that it can&#039;t be synthesized in a useful way by researchers-- and the negative effects that arise as a result. The authors propose BrainFrame, a &amp;quot;knowledge management system,&amp;quot; designed to streamline this massive amount of data in a way that is sensitive to the limitations of human cognition and perception. (Owner: Michael Spector, Discussant: Diem Tran, Discussant: Clara Kliman-Silver)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=989863.989880&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=43496808&amp;amp;CFTOKEN=17350291 The Challenge of Information Visualization Evaluation] Plaisant-2004-CVE &lt;br /&gt;
: The paper describes current evaluation practices being used in visualization research and challenges researchers are facing. It then suggests possible steps to improve visualization evaluation. (Owner: Diem Tran, Discussant: ?, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [https://docs.google.com/viewer?url=http%3A%2F%2Fwww.mifav.uniroma2.it%2Fiede_mk%2Fevents%2Fidea2010%2Fdoc%2FIxDEA_6_12.pdf Design and Evaluation of a Mobile Art Guide on iPod Touch]&lt;br /&gt;
: This paper evaluates the design principles behind an iPod app with respect to minimizing cognitive load and maximizing usability. Trade-offs between HCI technology and cognitive load are discussed. (Owner: Clara, Discussant: ?, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://adrenaline.ucsd.edu/kirsh/articles/cogscijournal/DistinguishingEpi_prag.pdf On Distinguishing Epistemic from Pragmatic Action] Kirsh and Maglio&lt;br /&gt;
: This is an interesting paper for the theory of interaction. A common assumption in visualization user models is that the purpose of interaction is always to affect the state of a program. However, the authors make the argument that some interactions make more sense as techniques to make reasoning more efficient. They study players in a game of Tetris and find a high number of interactions that are not goal-directed, but which might orient the game pieces to help the user envision where they fit. Since visualization is all about understanding a data space, and goals are often open-ended, this kind of &amp;quot;epistemic action&amp;quot; may be an important part of our user models. (Owner: Caroline Ziemkiewicz, Discussant: ?, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.180 Attention, Habituation and Conditioning: Toward a Computational Model] Balkenius-2000-AHC&lt;br /&gt;
** &amp;quot;The central claim of this article is that attention can be controlled in the same way as actions using similar learning mechanisms and by related areas of the brain.&amp;quot; &lt;br /&gt;
** &amp;quot;A computational model of attention is presented that uses habituation as well as classical and instrumental conditioning to explain a number of attentional processes.&amp;quot;&lt;br /&gt;
** &amp;quot;Computer simulations are presented that illustrates the operation of the model.&amp;quot;&lt;br /&gt;
: (Owner: [[User:Nathan Malkin|Nathan Malkin]], Discussant: ?, Discussant: ?)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5217</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5217"/>
		<updated>2011-09-20T16:31:33Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, from the 1940s, inform visual design and may provide an analogy for integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  The first and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided Search 2.0: A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, with the process of design)? The authors find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot;, but modeling &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753357 Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visual Design], Heer and Bostock, CHI &#039;10 &lt;br /&gt;
: This paper explores crowdsourcing as a viable method for conducting visualization perception evaluations. They replicate some results of Cleveland and McGill&#039;s 1984 graphical perception paper, and do some analysis on cost and performance of using MTurk for these studies on static, chart-type visualizations. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking for Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
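: The relationship Hick&#039;s law describes is usually written as T = b * log2(n + 1) for n equally probable choices. A minimal sketch (the slope value below is an illustrative placeholder, not a figure from the literature):&lt;br /&gt;

```python
import math

def hicks_law_time(n_choices: int, b: float = 0.2) -> float:
    """Mean decision time (seconds) among n equally probable choices.

    Hick's law: T = b * log2(n + 1).  The slope b is fit empirically
    per task and device; 0.2 s/bit here is an illustrative placeholder.
    """
    return b * math.log2(n_choices + 1)
```

: Decision time grows only logarithmically, so doubling the number of menu items adds roughly a constant b seconds rather than doubling the time.&lt;br /&gt;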
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query from assumptions about eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/PSYCO354/pdfstuff/Readings/Evans2.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. This implies that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green; font-weight: bold&amp;quot;&amp;gt;Owner: [[User:Nathan Malkin|Nathan Malkin]]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1518701.1518717&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Getting inspired!: understanding how and why examples are used in creative design practice] Herring-2009-GIU&lt;br /&gt;
: A user study on the use of examples to improve creativity. Results show that examples are very useful for inspiring designers with new ideas. Surprisingly, inspiring examples are not limited to ones from the design domain, but extend to other areas too.&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210002852&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=31-Jan-2011&amp;amp;view=c&amp;amp;wchp=dGLbVBA-zSkWA&amp;amp;md5=b7ee648939c3d41e25e3317f8be617dc/1-s2.0-S0747563210002852-main.pdf Contemporary cognitive load theory research: The good, the bad and the ugly] Kirschner-2010-CCL&lt;br /&gt;
: This review summarizes and critiques 16 papers on cognitive load theory (CLT) and its impact on learning and the ability to navigate different environments. It also discusses the difficulties inherent to the study of cognitive load and the moves made to attack them. While the paper does not contribute directly to our research, it does provide background on some of the issues of usability and &amp;quot;tolerance&amp;quot; of HCI systems that we discussed last week. [[User:Clara Kliman-Silver|Clara Kliman-Silver]] 13:54, 18 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://cacm.acm.org/magazines/2003/3/6879-models-of-attention-in-computing-and-communication/fulltext Models of attention in computing and communication: from principles to applications] Horvitz-2003-MAC&lt;br /&gt;
: Talks about efforts to make UIs &amp;quot;aware&amp;quot; of their user&#039;s ability to attend and comprehend. [[User:Jenna Zeigen|Jenna Zeigen]] 10:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://rl3tp7zf5x.scholar.serialssolutions.com/?sid=google&amp;amp;auinit=P&amp;amp;aulast=Slovic&amp;amp;atitle=The+construction+of+preference.&amp;amp;id=doi:10.1037/0003-066X.50.5.364&amp;amp;title=American+psychologist&amp;amp;volume=50&amp;amp;issue=5&amp;amp;date=1995&amp;amp;spage=364&amp;amp;issn=0003-066X The construction of preference ]&lt;br /&gt;
&lt;br /&gt;
* [http://search.bwh.harvard.edu/new/pubs/Kunar%20et%20al.%202008%20P%26P.pdf The role of memory and restricted context in repeated visual search] Kunar-2008-RMR&lt;br /&gt;
: Why don&#039;t people who have to perform repeated visual search (searching through an unchanging display for hundreds of trials) use their memory to speed up their tasks? Several experiments reported in this paper &amp;quot;show that participants choose *not* to use a memory strategy because, under these conditions, repeated memory search is actually less efficient than repeated visual search, even though the latter task is in itself relatively inefficient.&amp;quot; However, if you restrict where in the image the target stimuli may appear, using memory becomes more efficient.&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pinker.wjh.harvard.edu/articles/papers/Pinker%20A%20Theory%20of%20Graph%20Comprehension.pdf A Theory of Graph Comprehension] Steven Pinker&lt;br /&gt;
: [[User:Caroline Ziemkiewicz|Caroline Ziemkiewicz]] 17:52, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://adrenaline.ucsd.edu/kirsh/articles/cogscijournal/DistinguishingEpi_prag.pdf On Distinguishing Epistemic from Pragmatic Actions] Kirsh and Maglio&lt;br /&gt;
: [[User:Caroline Ziemkiewicz|Caroline Ziemkiewicz]] 17:52, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper demonstrating that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
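: Fitts&#039;s law, referenced by several entries in this section, predicts pointing time from target distance D and width W. A minimal sketch of the Shannon formulation (the coefficient defaults below are illustrative placeholders, not values from any of these papers):&lt;br /&gt;

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.1, b: float = 0.15) -> float:
    """Predicted pointing time (seconds), Shannon formulation:
    MT = a + b * log2(D / W + 1).  The intercept a and slope b are
    device-specific empirical fits; the defaults are placeholders.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty
```

: Note how the bubble cursor relates to this model: by effectively enlarging a target&#039;s activation width W, it lowers the index of difficulty and hence the predicted acquisition time.&lt;br /&gt;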
&lt;br /&gt;
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance, which used curvature to predict the speed of a pen as it moved across a surface, to help disambiguate target selection intent. &lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then measuring these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Modeling) aspect handles the parallel nature of several human components of HCI and seems to model the low-level tasks from this study very accurately.  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
: Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential over the years on research into menu techniques. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
: This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
: Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares the performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
: Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
: Any visualization which is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context to design the visualization application appropriately to fit the needs of real world users.  This paper studies in detail the daily workflow tasks and patterns of work of analysts, managers and software developers in a medium-sized software company.  This paper provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality, switch tasks on average every two to three minutes, and instead, work on larger thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, it is strongly indicated by this paper that many information workers are in a constant state of rapid fire multi-tasking.  This suggests that for a visualization to be relevant to any of these information workers, it would need to fit into, and support, this workflow.  This is just a first step towards understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  The purported advantages of automated evaluation are linked to efficiency, such as comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks the taxonomy down with the individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g. the Smith and Mosier guidelines, Motif style guidelines, etc.).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop discussion from CHI 2007 discussing the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Discusses things like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidtt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and related metacognitive failures, and then discusses misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook from an HCI class; it has many good lists of guidelines, especially Ch. 2, pp. 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods, a fair introduction to their industrial applications and their costs and benefits, as well as suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG), 5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper to discuss how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous paper to analytic tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (as art, science, and technology) and questions and quantifies its utility.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039; law to trajectory-based tasks.&lt;br /&gt;
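To make the contrast concrete, here is a minimal sketch of the two predictions: the Shannon formulation of Fitts&#039; law versus Accot and Zhai&#039;s steering law for a straight tunnel of constant width. The regression constants a and b are hypothetical placeholders; in practice they are fit per device and task.&lt;br /&gt;

```python
import math

def fitts_time(d, w, a=0.1, b=0.15):
    # Shannon formulation of Fitts' law: MT = a + b * log2(d/w + 1),
    # for a single pointing movement of distance d to a target of width w.
    return a + b * math.log2(d / w + 1)

def steering_time(d, w, a=0.1, b=0.15):
    # Steering law for a straight tunnel of length d and constant width w:
    # MT = a + b * (d / w).  The index of difficulty grows linearly rather
    # than logarithmically, so narrow paths are far more costly to traverse.
    return a + b * (d / w)
```

With the same (illustrative) constants, steering through a 100-unit tunnel 10 units wide is predicted to take much longer than pointing at a 10-unit target 100 units away, which is the key qualitative difference the paper formalizes.&lt;br /&gt;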
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several InfoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr. Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Operators, Methods, and Selection rules) interface analysis techniques.  GOMS is a widely used UI analysis paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
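The flavor of a GOMS analysis can be sketched with its simplest family member, the keystroke-level model (KLM): an interaction is decomposed into primitive operators and the predicted times are summed. The operator durations below are the commonly cited Card, Moran, and Newell estimates; treat them as rough defaults rather than authoritative values.&lt;br /&gt;

```python
# Illustrative keystroke-level model (KLM), the simplest member of the
# GOMS family.  Operator durations are in seconds.
KLM_OPERATORS = {
    "K": 0.2,   # keystroke or button press
    "P": 1.1,   # point at a target with a mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence):
    """Predict execution time for a string of KLM operators, e.g. 'HMPK'."""
    return sum(KLM_OPERATORS[op] for op in sequence)
```

For example, "move hand to mouse, think, point, click" is the sequence "HMPK", whose predicted time is simply the sum of the four operator durations.&lt;br /&gt;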
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptive user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptive UIs: frequency of adaptation, accuracy of adaptation, and predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.pdf The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A paper that serves as a prelude to the above; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interactive visualization interface for 3D selection of neural pathways in human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and intuitively. (--- [[User:Chen Xu|Chen Xu]] 15:04, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye tracking methodologies continue to grow and improve. (Clara, 11 September 2011--OWNER; Chen -- Discussant)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and to see whether the tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
: Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.comp.leeds.ac.uk/umuas/reading-group/kaptelinin-ch5.pdf Activity Theory: Implications for Human-Computer Interaction] Kaptelinin--1996-ATI&lt;br /&gt;
: This article discusses activity theory, an alternative to present theories surrounding HCI. In particular, it examines the principal differences between activity theory and cognitive theory, applies it to HCI, and suggests implications for the field. While not directly relevant to the proposal, it offers an alternate framework for some of the issues that we discuss. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pages.cpsc.ucalgary.ca/~saul/wiki/uploads/HCIPapers/landauer-letsgetreal.pdf Let&#039;s Get Real: a Position Paper on the Role of Cognitive Psychology in the Design of Humanely Usable and Useful Systems] Landauer-1991-LGR&lt;br /&gt;
: Perhaps less useful, only because it&#039;s 20 years old, but an interesting read nonetheless: this paper questions the &amp;quot;modern&amp;quot; relevance of cognitive psychology to human-computer interaction design. The primary issue, it argues, is that human-computer systems are entirely unpredictable, and thus, some of the modern understanding of cognition (and, indeed, HCI theory) simply cannot apply given the erratic behavior of computer systems. Instead, he addresses some of the more &amp;quot;useful models,&amp;quot; including Fitts&#039; law and theories of visual perception, to define a new space for emerging research in HCI. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1978969&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Mid-air pan-and-zoom on wall-sized displays] Nancel-2011-MPZ&lt;br /&gt;
: The paper describes approaches to perform pan and zoom tasks in mid-air: bimanual &amp;amp; unimanual, linear &amp;amp; circular gestures. &lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979430&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 LiquidText: a flexible, multitouch environment to support active reading] Tashman-2011-LER&lt;br /&gt;
: A technique utilizing multitouch to improve reading efficiency.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979392&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Rethinking &#039;multi-user&#039;: an in-the-wild study of how groups approach a walk-up-and-use tabletop interface] Marshall-2011-RMU&lt;br /&gt;
: An ethnographic study that explores how groups of users approach tabletop interfaces in real environments. Some results contradict existing findings.&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU&lt;br /&gt;
: Discusses the merits of HCI (here called muCI, for muscle-computer interaction) by detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://courses.ischool.utexas.edu/rbias/2009/Spring/INF385P/files/annurev.psych.54.101601.pdf HUMAN-COMPUTER INTERACTION: Psychological Aspects of the Human Use of Computing] Olson-2003-HCI&lt;br /&gt;
: Overview of issues in psychology in HCI ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210001718&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=30-Nov-2010&amp;amp;view=c&amp;amp;wchp=dGLzVlt-zSkzS&amp;amp;md5=06d9eeba2447db1d43abcf99d1d7e995/1-s2.0-S0747563210001718-main.pdf Integrating cognitive load theory and concepts of human–computer interaction] Hollender-2010-ICL&lt;br /&gt;
: This paper compares existing models of cognitive load theory as it applies to HCI, reviews present literature, and discusses current problems and potential advances. Relevant to our work but ventures into theory that is heavier than necessary given our purposes. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 15:58, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=4154947 Gesture Recognition : A Survey] IEEE&lt;br /&gt;
: This article provides a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. ([[User:Wenjun Wang|Wenjun Wang]] , 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science/article/pii/S1071581903001125 A person–artefact–task (PAT) model of flow antecedents in computer-mediated environments] Finneran-2003-PAT&lt;br /&gt;
: Re-evaluates flow theory within the HCI framework and proposes a model that fits best in the field [[User:Jenna Zeigen|Jenna Zeigen]] 11:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1980000/1978995/p363-pan.pdf?ip=138.16.109.17&amp;amp;CFID=42891662&amp;amp;CFTOKEN=62664825&amp;amp;__acm__=1316465415_bf314f267e966fb3000292152378c16a Now Where Was I? Psychologically-Triggered Bookmarking], Pan et al., CHI &#039;11&lt;br /&gt;
: Presents an interaction paradigm for implicitly bookmarking application progress/media during user interruptions (e.g., phone ringing), using galvanic skin response (GSR) to automatically identify interruptions via the orienting response (OR).  The authors evaluate how well GSR works for identifying user ORs, and describe a few experiments using an audiobook listener application that creates bookmarks (stored and represented in a GUI) automatically when GSR peaks in response to controlled stimuli.  In our project, we&#039;re looking at predicting user states (or effect on task performance), so this is an example of that kind of predictive affect-tracking that has been engineered into a usability feature.  ([[User:Steven Gomez|Steven Gomez]] 17:12, 19 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/people/benko/publications/2009/Ripples%20UIST09.pdf Ripples: Utilizing Per-Contact Visualizations to Improve User Interaction with Touch Displays], Wigdor-2009-PCV&lt;br /&gt;
: Demonstrates a system for visual feedback on touch screen interfaces, designed to reduce the ambiguity that can arise in the absence of feedback. The system changes the feedback based on the task at hand and claims to provide information in an intuitive way as to how the touch screen is registering the inputs. [http://research.microsoft.com/en-us/um/people/benko/publications/2009/Ripples_UIST.wmv Video demonstration] [[User:Michael Spector|Michael Spector]] 17:30, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=1979013 Synchronous interaction among hundreds: an evaluation of a conference in an avatar-based virtual environment]CHI-2011&lt;br /&gt;
:This paper presents the first in-depth evaluation of a large multi-format virtual conference. The conference took place in an avatar-based 3D virtual world with spatialized audio, and had keynote, poster and social sessions. ([[User: Wenjun Wang|Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the promises] Jacob-2003-ETH&lt;br /&gt;
: This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent tension between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  Little reason is given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticisms that follow each description, in which the theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
: High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:An attempt to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen: the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R]&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar]&lt;br /&gt;
:Soar is another cognitive architecture developed at CMU, now maintained at the University of Michigan.  (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.4719&amp;amp;rep=rep1&amp;amp;type=pdf Brain-Computer Interfaces and Human-Computer Interaction]&lt;br /&gt;
: (Not sure what heading this ought to go under!) Brain-Computer Interfaces (BCIs) offer a neurological analogue to conventional human-computer interfaces: users signal to machines with their thoughts instead of relying on physical movements. Thus, the areas activated are purely cognitive, not motor. The article provides an overview of the differences between HCI and BCI, the implications thereof, and the directions that their interaction may take. Relevant to some of the issues and concepts raised in the proposal, in addition to being a rather interesting idea! (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=101 Spanning Seven Orders of Magnitude: A Challenge for Cognitive Modeling], John Anderson, 2002&lt;br /&gt;
: The paper argues that high-level human behavior can be understood by analyzing the chain of fast, low level activity (from 10ms up) in the perceptual/cognitive bands that compose larger behaviors. It gives an intro to ACT-R and variants and some compelling examples for cognitive modeling and eye-tracking. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://search.ebscohost.com.revproxy.brown.edu/login.aspx?direct=true&amp;amp;db=psyh&amp;amp;AN=2003-08881-009&amp;amp;site=ehost-live Cyberpsychology: A Human-Interaction Perspective Based on Cognitive Modeling] Emond-2003-CHI&lt;br /&gt;
:This paper argues for the applicability of cognitive modeling to cyberpsychology, the study of the impact of computer and Internet interaction on humans. ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1980000/1978559/p62-modha.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315949230_6e20807eca999db6a25f54bcbf51ae6a Cognitive Computing], Modha-2011-CC&lt;br /&gt;
: Unites neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain’s core algorithms.  (--- [[User:Chen Xu|Chen Xu]] 17:25, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: (This item can also go into the Evaluation category) This paper discusses cognitive load theory and its application in measuring the effectiveness of graph visualization. A model of user task performance, mental effort, and cognitive load is proposed, and experiments have been conducted to refine it. This seems to be an attempt along the line of defining quality metrics for visualization through cognitive modeling, which closely relates to our proposal. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.4716&amp;amp;rep=rep1&amp;amp;type=pdf Supporting Collaboration and Distributed Cognition in Context-Aware Pervasive Computing Environments] Fischer-2004-SCD&lt;br /&gt;
: (Not exactly sure where this paper belongs.) The paper looks at several issues in HCI: 1) collaborative design (multiple users accessing, manipulating, and addressing information, in what they call &amp;quot;large computational spaces&amp;quot;), 2) mobile technology (mobile phones, wireless, etc.), and 3) smart objects (seems to be largely mobile phones and similar devices). This paper is dated, and parts of it are very interesting while others are irrelevant. Nonetheless, it asks some important questions about how to deliver information to the user, how to manage search techniques and memory systems (particularly with searching) in HCI, and how to access information, all of which are crucial to &amp;quot;modern&amp;quot; HCI research. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 17:15, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.180 Attention, Habituation and Conditioning: Toward a Computational Model] Balkenius-2000-AHC&lt;br /&gt;
** &amp;quot;The central claim of this article is that attention can be controlled in the same way as actions using similar learning mechanisms and by related areas of the brain.&amp;quot; &lt;br /&gt;
** &amp;quot;A computational model of attention is presented that uses habituation as well as classical and instrumental conditioning to explain a number of attentional processes.&amp;quot;&lt;br /&gt;
** &amp;quot;Computer simulations are presented that illustrates the operation of the model.&amp;quot;&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.computer.org/portal/web/csdl/doi/10.1109/IV.2003.1218045 CAEVA: Cognitive Architecture to Evaluate Visualization Applications] Juarez-Espinosa, 2003&lt;br /&gt;
: (May also be categorized under Evaluation.) According to the author, this is the first paper to use cognitive modeling to evaluate computer applications. It presents a rather high-level picture of a proposed cognitive architecture intended for evaluating visualization applications. The architecture has two main components: an interoperability model and a cognitive model. The cognitive model simulates a human interacting with the visualization system and comprises domain-dependent and domain-independent knowledge; each kind of knowledge is further divided into declarative and procedural knowledge. The interoperability model describes how the cognitive model communicates with the visualization application.&lt;br /&gt;
: The paper does not provide a detailed implementation of, or experiments with, the proposed cognitive architecture; it appears the architecture was not yet fully functional when the paper was published.&lt;br /&gt;
: ([[User:Hua Guo|Hua]], Sep. 19, 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ofai.at/research/agents/conf/at2ai6/papers/Muller.pdf Implementing a Cognitive Model in Soar and ACT-R: A Comparison] Muller&lt;br /&gt;
: This paper presents an implementation of a cognitive model in the cognitive architecture Soar and compares it with a previous implementation in ACT-R. The cognitive task itself is not especially relevant to our research, but the paper may help give an idea of what it is like to implement a cognitive task in ACT-R or Soar. ([[User:Hua Guo|Hua]], Sep. 19, 2011)&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
: See the summary for Alexander below. (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source: what makes a human space work, ineffable best practices, ~250 rules being enough to cover communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
: Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic behind tangible user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method that emphasizes discouraging undesirable behavior, perhaps by forcing the user to adapt to the interface, and gives equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and the wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines, some of which we may be able to build on in automating interface evaluation; they will certainly apply to manual evaluations. [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for Information Visualization], University of Maryland tech report CS-TR-3665. (Jian)&lt;br /&gt;
: The paper presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1979100 Understanding Interaction Design Practices], Goodman et al., CHI &#039;11 (Owner: Diem Tran - Steven, if you already own this, let me know)&lt;br /&gt;
: This is a position paper describing the disconnect between HCI research and real interaction design practices.  It analyzes approaches for studying design practice (e.g., reported practice, anecdotal descriptions, first-person research), and argues for the need for generative theories of design in order to address practice.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=223904.223945 Transparent layered user interfaces: an evaluation of a display design to enhance focused and divided attention] Harrison-1995-TLU&lt;br /&gt;
: Proposes a framework for classifying and evaluating user interfaces with semi-transparent windows. Comes out of research investigating graphical user interfaces from an attentional perspective. [[User:Jenna Zeigen|Jenna Zeigen]] 11:31, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [https://docs.google.com/viewer?url=http%3A%2F%2Fwww.mifav.uniroma2.it%2Fiede_mk%2Fevents%2Fidea2010%2Fdoc%2FIxDEA_6_12.pdf Design and Evaluation of a Mobile Art Guide on iPod Touch]&lt;br /&gt;
: This paper evaluates the design principles behind an iPod app with respect to minimizing cognitive load and maximizing usability. Trade-offs between HCI technology and cognitive load are discussed. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 12:29, 19 September 2011 (EDT)--OWNER for Week 3)&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:A research agenda for tools for intelligence analysts; not sure of its relevance. (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge - Improving Decisions About Health, Wealth, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using graphical interaction histories within the Tableau InfoVis application.  This is a great recent example of the &amp;quot;workflow analysis&amp;quot; we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An Exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to identify the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Same as the previous one.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=3xwfia2DpmoC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=inspiration&amp;amp;ots=WxmfUK8fgu&amp;amp;sig=RZxglxiWjKIu5MHpYRAAO6cqR_I#v=onepage&amp;amp;q&amp;amp;f=false Autonomous robots: from biological inspiration to implementation and control ]&lt;br /&gt;
: Autonomous robots are intelligent machines capable of performing tasks in the world by themselves, without explicit human control. This book examines the underlying technology, including control, architectures, learning, manipulation, grasping, navigation, and mapping. Living systems can be considered the prototypes of autonomous systems. ([[User: Wenjun Wang | Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
* [http://sfu.academia.edu/ChrisShaw/Papers/138757/BrainFrame_A_Knowledge_Visualization_System_for_the_Neurosciences Brainframe: A Knowledge Visualization System for the Neurosciences] Shaw-2009-KVS&lt;br /&gt;
: This paper begins with a brief overview of a problem plaguing the field of neuroscience today -- namely, that there is so much data available that it can&#039;t be synthesized in a useful way by researchers -- and the resulting negative effects that arise because of it. The authors propose BrainFrame, a &amp;quot;knowledge management system,&amp;quot; designed to streamline this massive amount of data in a way that is sensitive to the limitations of human cognition and perception. [[User:Michael Spector|Michael Spector]] 00:36, 20 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1495824 Data, Information, and Knowledge in Visualization], Chen et al., CG&amp;amp;A Jan 09&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.  (As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1137505 An Approach to the Perceptual Optimization of Complex Visualizations], House, Bair, and Ware, TVCG, 2006&lt;br /&gt;
: This paper describes a human-in-the-loop architecture for guiding layered visualizations with multiple visual parameters toward optimal tunings.  They use a genetic algorithm to iteratively produce new &amp;quot;genomes&amp;quot; of visual parameters that are evaluated by humans (and either passed along or terminated in the genetic process).  Finally, they do some analysis on the surviving visualization space (though for me, this was less interesting than the generative visualization method using humans and the GA).  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11) -- &#039;&#039;&#039;Owner: Jenna Zeigen&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1180000/1179816/p168-klein.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315948835_60407af6754ff14995331d7da5b2e5f2 Brain structure visualization using spectral fiber clustering] Klein-2006-BSV&lt;br /&gt;
: Presents a novel algorithm for visualizing white-matter fiber tracts in real time, with more accurate results. This visualization algorithm might be adopted in our proposal. ( --- [[User:Chen Xu|Chen Xu]] 17:21, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/570000/566646/p745-van_wijk.pdf?ip=138.16.160.6&amp;amp;CFID=45026276&amp;amp;CFTOKEN=25635165&amp;amp;__acm__=1316442526_0ba5fadd25f8f9e571d2e63ba88f17f9 Image Based Flow Visualization] van Wijk-2002-IBF&lt;br /&gt;
: Describes a two-dimensional fluid-flow visualization method, called IBFV, based on advection and decay of dye. With IBFV, a wide variety of visualization techniques can be emulated: it can visualize flow, generate arrow plots, streamlines, particles, and topological images, and handle unsteady flows. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2004.00753.x/full The State of the Art in Flow Visualization: Dense and Texture-Based Techniques] Laramee-2004-SAF&lt;br /&gt;
: Discusses dense, texture-based flow visualization techniques. (Chen)&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382909 Interactive Visualization of Small World Graphs] van Ham-2004-IVS&lt;br /&gt;
: The brain can be considered to be a small world type network. This paper &amp;quot;present[s] a method to create scalable, interactive visualizations of small world graphs, allowing the user to inspect local clusters while maintaining a global overview of the entire structure.&amp;quot; [[User:Jenna Zeigen|Jenna Zeigen]] 11:03, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=wdh2gqWfQmgC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=visualization&amp;amp;ots=olzI7xnGLy&amp;amp;sig=7F0m2_NpZcU-fr0V5CPzf99PpK4#v=onepage&amp;amp;q&amp;amp;f=false Readings in information visualization: using vision to think ] By Stuart K. Card, Jock D. Mackinlay, Ben Shneiderman.  ([[User: Wenjun Wang | Wenjun Wang]]) 11:09, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=546918 Visualization Toolkit: An Object-Oriented Approach to 3-D Graphics ]   ([[User: Wenjun Wang | Wenjun Wang]]) 11:15, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1260759&amp;amp;tag=1 Human factors in visualization research ] Tory-2004-HFV ([[User:Diem Tran|Diem Tran]] 15:15, 19 September 2011 (EDT))&lt;br /&gt;
: The paper discusses the following three aspects of human factors in visualization research: 1) reviews known methodology for doing human-factors research, with specific emphasis on visualization; 2) reviews current human-factors research in visualization to provide a basis for future investigation; 3) identifies promising areas for future research.&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www2.iwr.uni-heidelberg.de/groups/CoVis/Data/Papers/euroVis10.pdf A Salience-based Quality Metric for Visualization] Jänicke-Chen-2010&lt;br /&gt;
: This paper describes a method for defining quality metrics for visualization based on the distribution of salience over a visualization image. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/990000/989880/p109-plaisant.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932639_76770a28b192503c5f3c0ac3f997a8f1 The Challenge of Information Visualization Evaluation] Plaisant-2004&lt;br /&gt;
: This paper surveys the field of information visualization evaluation: current practices, challenges, and possible next steps. It is a relatively old article, though, so it may be superseded by a more recent survey. ([[User:Hua Guo|Hua]], Sep. 12, 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1170000/1168162/a9-zuk.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932894_18cbcddc03c6ac39fc573f72790ea5d1 Heuristics for Information Visualization Evaluation] Zuk et al-2006&lt;br /&gt;
: This paper attempts to apply some well-known heuristic evaluation techniques from HCI to information visualization. ([[User:Hua Guo|Hua]], Sep. 12, 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://www.computer.org/portal/web/csdl/doi/10.1109/MCG.2006.70 Toward Measuring Visualization Insight]&lt;br /&gt;
: Do current approaches for evaluating visualizations provide measures of insight? This viewpoint identifies critical characteristics of insight, argues the fundamental reasons why traditional controlled experiments with benchmark tasks on visualizations do not effectively measure insight, and offers a new approach to controlled experiments that can better capture the notion of insight. ([[User: Wenjun Wang| Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=989863.989880&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=43496808&amp;amp;CFTOKEN=17350291 The Challenge of Information Visualization Evaluation] Plaisant-2004-CVE ([[User:Diem Tran|Diem Tran]] 15:15, 19 September 2011 (EDT))&lt;br /&gt;
: The paper describes current evaluation practices being used in visualization research and challenges researchers are facing. It then suggests possible steps to improve visualization evaluation.&lt;br /&gt;
&lt;br /&gt;
* [http://ccom.unh.edu/vislab/PDFs/PineoWareTAP.pdf Data Visualization Optimization via Computational Modeling of Perception] Ware-2011-TVCG&lt;br /&gt;
: This paper presents a computational model of human vision that can be used to optimize and evaluate visualization systems. I think this is a great example of the application of cognitive modeling to visualization. ([[User:Hua Guo|Hua]], Sep. 19, 2011)&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://leadserv.u-bourgogne.fr/files/publications/000599-multi-label-classification-of-music-into-emotions.pdf Multi-Label Classification of Music into Emotions] ISMIR 2008&lt;br /&gt;
: Humans are, by nature, emotionally affected by music. This interdisciplinary paper works toward automated emotion detection in music; four algorithms are evaluated and compared on this task. ([[User:Wenjun Wang|Wenjun Wang]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
== Visual Analysis ==&lt;br /&gt;
* [http://www.limsi.fr/Individu/tarroux/enseignement/old/FraundorferBischof-wapcv2003.pdf Utilizing Saliency Operators for Image Matching] Fraundorfer-2003-USO&lt;br /&gt;
: This paper from the field of computer vision describes mathematical and geometric techniques for identifying regions of saliency and &amp;quot;sub-saliency&amp;quot; in images.&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/l81xhl05464u1473/ Contextual Priming for Object Detection] Torralba-2003-CPO&lt;br /&gt;
: This is another vision paper; it describes how we can use the context in which a region of an image appears to help with object detection in that region.&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_3.11&amp;diff=5213</id>
		<title>CS295J/Literature class 3.11</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_3.11&amp;diff=5213"/>
		<updated>2011-09-20T15:49:47Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [http://delivery.acm.org/10.1145/1980000/1978995/p363-pan.pdf?ip=138.16.109.17&amp;amp;CFID=42891662&amp;amp;CFTOKEN=62664825&amp;amp;__acm__=1316465415_bf314f267e966fb3000292152378c16a Now Where Was I? Psychologically-Triggered Bookmarking], Pan et al., CHI &#039;11&lt;br /&gt;
: Presents an interaction paradigm for implicitly bookmarking application progress/media during user interruptions (e.g., a phone ringing), using galvanic skin response (GSR) to automatically identify interruptions via the orienting response (OR).  The authors evaluate how well GSR works for identifying user ORs, and describe a few experiments using an audiobook-listener application that creates bookmarks (stored and represented in a GUI) automatically when GSR peaks in response to controlled stimuli.  In our project, we&#039;re looking at predicting user states (or their effect on task performance), so this is an example of that kind of predictive affect-tracking engineered into a usability feature.  (Owner: [[User:Steven Gomez|Steven Gomez]], Discussant: Wenjun Wang, Discussant: Nathan)&lt;br /&gt;
&lt;br /&gt;
* [http://ccom.unh.edu/vislab/PDFs/PineoWareTAP.pdf Data Visualization Optimization via Computational Modeling of Perception] Ware-2011-TVCG&lt;br /&gt;
: This paper presents a computational model of human vision that can be used to optimize and evaluate visualization systems. I think this is a great example of the application of cognitive modeling to visualization. (Owner: [[User:Hua Guo|Hua]], Discussant: Diem Tran, Discussant: ? Sep.19, 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=1979013 Synchronous interaction among hundreds: an evaluation of a conference in an avatar-based virtual environment] CHI-2011&lt;br /&gt;
:This paper presents the first in-depth evaluation of a large multi-format virtual conference. The conference took place in an avatar-based 3D virtual world with spatialized audio, and had keynote, poster, and social sessions. (Owner: [[User: Wenjun Wang|Wenjun Wang]], Discussant: ?, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://sfu.academia.edu/ChrisShaw/Papers/138757/BrainFrame_A_Knowledge_Visualization_System_for_the_Neurosciences Brainframe: A Knowledge Visualization System for the Neurosciences] Shaw-2009-KVS&lt;br /&gt;
: This paper begins with a brief overview of a problem plaguing the field of neuroscience today -- namely, that there is so much data available that it can&#039;t be synthesized in a useful way by researchers -- and the negative effects that arise as a result. The authors propose BrainFrame, a &amp;quot;knowledge management system,&amp;quot; designed to streamline this massive amount of data in a way that is sensitive to the limitations of human cognition and perception. (Owner: Michael Spector, Discussant: Diem Tran, Discussant: Clara Kliman-Silver)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=989863.989880&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=43496808&amp;amp;CFTOKEN=17350291 The Challenge of Information Visualization Evaluation] Plaisant-2004-CVE &lt;br /&gt;
: The paper describes current evaluation practices being used in visualization research and challenges researchers are facing. It then suggests possible steps to improve visualization evaluation. (Owner: Diem Tran, Discussant: ?, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [https://docs.google.com/viewer?url=http%3A%2F%2Fwww.mifav.uniroma2.it%2Fiede_mk%2Fevents%2Fidea2010%2Fdoc%2FIxDEA_6_12.pdf Design and Evaluation of a Mobile Art Guide on iPod Touch]&lt;br /&gt;
: This paper evaluates the design principles behind an iPod app with respect to minimizing cognitive load and maximizing usability. Trade-offs between HCI technology and cognitive load are discussed. (Owner: Clara, Discussant: ?, Discussant: ?)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5175</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5175"/>
		<updated>2011-09-19T16:14:18Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, from the 1940s, inform visual design and may provide an analogy for the integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  First and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided Search 2.0: A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, the process of design)? They find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot; but it &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753357 Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visual Design], Heer and Bostock, CHI &#039;10 &lt;br /&gt;
: This paper explores crowdsourcing as a viable method for conducting visualization perception evaluations. They replicate some results of Cleveland and McGill&#039;s 1984 graphical perception paper, and do some analysis on cost and performance of using MTurk for these studies on static, chart-type visualizations. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
&lt;br /&gt;
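The quantitative form of Hick's law can be sketched directly. This is a minimal illustration, not code from any of the listed papers; the coefficients `a` and `b` are arbitrary placeholders that would have to be fitted to empirical reaction-time data:

```python
from math import log2

def hick_decision_time(n_choices: int, a: float = 0.0, b: float = 0.15) -> float:
    """Hick's law: T = a + b * log2(n + 1).

    a and b are device/user-specific constants (the defaults here are
    arbitrary).  The +1 accounts for the option of not responding at all.
    """
    return a + b * log2(n_choices + 1)

# Decision time grows logarithmically, not linearly, with the number of
# choices: going from 7 to 15 options adds only one "bit" of difficulty.
print(hick_decision_time(7))   # 3 bits of choice
print(hick_decision_time(15))  # 4 bits of choice
```

The logarithmic growth is why large flat menus degrade more gracefully than a linear model would predict.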
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query from assumptions from eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/PSYCO354/pdfstuff/Readings/Evans2.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green; font-weight: bold&amp;quot;&amp;gt;Owner: [[User:Nathan Malkin|Nathan Malkin]]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1518701.1518717&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Getting inspired!: understanding how and why examples are used in creative design practice] Herring-2009-GIU&lt;br /&gt;
: A user study on the use of examples to improve creativity. Results show that examples are very useful for inspiring designers with new ideas. Surprisingly, inspiring examples are not limited to the design domain, but extend to other areas too.&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210002852&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=31-Jan-2011&amp;amp;view=c&amp;amp;wchp=dGLbVBA-zSkWA&amp;amp;md5=b7ee648939c3d41e25e3317f8be617dc/1-s2.0-S0747563210002852-main.pdf Contemporary cognitive load theory research: The good, the bad and the ugly] Kirschner-2010-CCL&lt;br /&gt;
: This review summarizes and critiques 16 papers on cognitive load theory (CLT) and its impact on learning and the ability to navigate different environments. It also discusses the difficulties inherent in the study of cognitive load and the moves made to attack them. While the paper does not contribute directly to our research, it does provide background on some of the issues of usability and &amp;quot;tolerance&amp;quot; of HCI systems that we discussed last week. [[User:Clara Kliman-Silver|Clara Kliman-Silver]] 13:54, 18 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://cacm.acm.org/magazines/2003/3/6879-models-of-attention-in-computing-and-communication/fulltext Models of attention in computing and communication: from principles to applications] Horvitz-2003-MAC&lt;br /&gt;
: [[User:Jenna Zeigen|Jenna Zeigen]] 10:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://rl3tp7zf5x.scholar.serialssolutions.com/?sid=google&amp;amp;auinit=P&amp;amp;aulast=Slovic&amp;amp;atitle=The+construction+of+preference.&amp;amp;id=doi:10.1037/0003-066X.50.5.364&amp;amp;title=American+psychologist&amp;amp;volume=50&amp;amp;issue=5&amp;amp;date=1995&amp;amp;spage=364&amp;amp;issn=0003-066X The construction of preference ]&lt;br /&gt;
&lt;br /&gt;
* [http://search.bwh.harvard.edu/new/pubs/Kunar%20et%20al.%202008%20P%26P.pdf The role of memory and restricted context in repeated visual search] Kunar-2008-RMR&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper which demonstrated that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
&lt;br /&gt;
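Fitts's law itself, which these pointing papers build on, fits in a one-liner. The Shannon formulation and the placeholder coefficients below are a sketch, not taken from the Bubble Cursor paper:

```python
from math import log2

def fitts_time(distance: float, width: float,
               a: float = 0.0, b: float = 0.1) -> float:
    """Fitts's law (Shannon formulation): T = a + b * log2(D / W + 1).

    D is the distance to the target and W its width along the axis of
    motion; a and b must be fitted per device/user (defaults arbitrary).
    """
    return a + b * log2(distance / width + 1)

# The bubble cursor effectively enlarges W, lowering the index of
# difficulty log2(D / W + 1) and thus the predicted selection time.
print(fitts_time(310, 10))  # ID = 5 bits
print(fitts_time(310, 40))  # larger effective target, lower ID
```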
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent. &lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then measure these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Modeling) aspect handles the parallel nature of several human components of HCI and seems to very accurately model the low-level tasks from this study.  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
: Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential over the years on research into menu techniques. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
: This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
: Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares the performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
: Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
: Any visualization which is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context to design the visualization application appropriately to fit the needs of real world users.  This paper studies in detail the daily workflow tasks and patterns of work of analysts, managers and software developers in a medium-sized software company.  This paper provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality, switch tasks on average every two to three minutes, and instead, work on larger thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, it is strongly indicated by this paper that many information workers are in a constant state of rapid fire multi-tasking.  This suggests that for a visualization to be relevant to any of these information workers, it would need to fit into, and support, this workflow.  This is just a first step towards understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  The purported advantages of automated evaluation are linked to efficiency, such as comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks down a taxonomy with individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g. Smith and Mosier guidelines, Motif style guidelines, etc).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop discussion from CHI 2007 on the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Covers things like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidtt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failure and then discuss misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class, has many good lists of guidelines. Especially Ch.2 pp 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods, a fair introduction to their industrial applications, costs, and benefits, as well as suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG), 5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper to address how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous paper to analytic tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (art, science, and technology), and questions and quantifies its utility.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039;s law to trajectory-based tasks.&lt;br /&gt;
&lt;br /&gt;
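The extension above can be illustrated with a toy calculation (a sketch, not code from Accot and Zhai's paper). For a straight tunnel of constant width, their steering law reduces to T = a + b * (A / W), with illustrative placeholder coefficients:

```python
def steering_time(path_length: float, tunnel_width: float,
                  a: float = 0.0, b: float = 0.1) -> float:
    """Steering law for a straight, constant-width tunnel:
    T = a + b * (A / W), where A is the path length and W the width.

    In general the index of difficulty is the integral of ds / W(s)
    along the path.  a and b are fitted constants (defaults arbitrary).
    """
    return a + b * (path_length / tunnel_width)

# Halving the tunnel width doubles the index of difficulty A / W,
# and hence (for a = 0) doubles the predicted steering time.
print(steering_time(400, 40))  # ID = 10
print(steering_time(400, 20))  # ID = 20
```

This is the model behind, e.g., why narrow cascading-menu corridors are slow to traverse.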
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several infoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr.Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four GOMS-based (Goals, Operators, Methods, and Selection rules) interface analysis techniques.  GOMS is a widely used UI paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.pdf The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A prelude to the paper above; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interactive visualization interface for 3D selection of neural pathways in human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and intuitively. (--- [[User:Chen Xu|Chen Xu]] 15:04, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye tracking methodologies continue to grow and improve. (Clara, 11 September 2011--OWNER; Chen -- Discussant)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
: Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.comp.leeds.ac.uk/umuas/reading-group/kaptelinin-ch5.pdf Activity Theory: Implications for Human-Computer Interaction] Kaptelinin--1996-ATI&lt;br /&gt;
: This article discusses activity theory, an alternative to present theories surrounding HCI. In particular, it examines the principal differences between activity theory and cognitive theory, applies it to HCI, and suggests implications for the field. While not directly relevant to the proposal, it offers an alternate framework for some of the issues that we discuss. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pages.cpsc.ucalgary.ca/~saul/wiki/uploads/HCIPapers/landauer-letsgetreal.pdf Let&#039;s Get Real: a Position Paper on the Role of Cognitive Psychology in the Design of Humanely Usable and Useful Systems] Landauer-1991-LGR&lt;br /&gt;
: Perhaps less useful, only because it&#039;s 20 years old, but an interesting read nonetheless: this paper questions the &amp;quot;modern&amp;quot; relevance of cognitive psychology to human-computer interaction design. The primary issue, it argues, is that human-computer systems are entirely unpredictable, and thus, some of the modern understanding of cognition (and, indeed, HCI theory) simply cannot apply given the erratic behavior of computer systems. Instead, he addresses some of the more &amp;quot;useful models,&amp;quot; including Fitts&#039; law and theories of visual perception, to define a new space for emerging research in HCI. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1978969&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Mid-air pan-and-zoom on wall-sized displays] Nancel-2011-MPZ&lt;br /&gt;
: The paper describes approaches for performing pan-and-zoom tasks in mid-air: bimanual &amp;amp; unimanual, linear &amp;amp; circular gestures. &lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979430&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 LiquidText: a flexible, multitouch environment to support active reading] Tashman-2011-LER&lt;br /&gt;
: A technique utilizing multitouch to improve reading efficiency.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979392&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Rethinking &#039;multi-user&#039;: an in-the-wild study of how groups approach a walk-up-and-use tabletop interface] Marshall-2011-RMU&lt;br /&gt;
: An ethnographic study that explores how groups of users approach tabletop interfaces in real environments. Some results contradict existing findings.&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU&lt;br /&gt;
: Discusses the merits of HCI (here called muCI, for muscle-computer interaction) by detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://courses.ischool.utexas.edu/rbias/2009/Spring/INF385P/files/annurev.psych.54.101601.pdf HUMAN-COMPUTER INTERACTION: Psychological Aspects of the Human Use of Computing] Olson-2003-HCI&lt;br /&gt;
: Overview of issues in psychology in HCI ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210001718&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=30-Nov-2010&amp;amp;view=c&amp;amp;wchp=dGLzVlt-zSkzS&amp;amp;md5=06d9eeba2447db1d43abcf99d1d7e995/1-s2.0-S0747563210001718-main.pdf Integrating cognitive load theory and concepts of human–computer interaction] Hollender-2010-ICL&lt;br /&gt;
: This paper compares existing models of cognitive load theory as it applies to HCI, reviews present literature, and discusses current problems and potential advances. Relevant to our work but ventures into theory that is heavier than necessary given our purposes. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 15:58, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=4154947 Gesture Recognition: A Survey] IEEE&lt;br /&gt;
: This article provides a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. ([[User:Wenjun Wang|Wenjun Wang]] , 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science/article/pii/S1071581903001125 A person–artefact–task (PAT) model of flow antecedents in computer-mediated environments] Finneran-2003-PAT&lt;br /&gt;
: Re-evaluates flow theory within the HCI framework and proposes a model that best fits the field. [[User:Jenna Zeigen|Jenna Zeigen]] 11:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction, which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the promises] Jacob-2003-ETH&lt;br /&gt;
: This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent feud between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  There is little reason given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticisms which follow descriptions, in which theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
: High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:Attempts to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen—the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R]&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar]&lt;br /&gt;
:Soar is another cognitive architecture developed at CMU, now maintained at UMich.  (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.4719&amp;amp;rep=rep1&amp;amp;type=pdf Brain-Computer Interfaces and Human-Computer Interaction]&lt;br /&gt;
: (Not sure what heading this ought to go under!) Brain-Computer Interfaces (BCI) offer a neurological corollary to human-computer interfaces: users use their thoughts to signal to machines, instead of relying on physical movements. Thus, the areas activated are purely cognitive, not motor. The article provides an overview of the differences between HCI and BCI, the implications thereof, and the directions that their interaction may take. Relevant to some of the issues and concepts raised in the proposal, in addition to being a rather interesting idea! (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=101 Spanning Seven Orders of Magnitude: A Challenge for Cognitive Modeling], John Anderson, 2002&lt;br /&gt;
: The paper argues that high-level human behavior can be understood by analyzing the chain of fast, low level activity (from 10ms up) in the perceptual/cognitive bands that compose larger behaviors. It gives an intro to ACT-R and variants and some compelling examples for cognitive modeling and eye-tracking. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://search.ebscohost.com.revproxy.brown.edu/login.aspx?direct=true&amp;amp;db=psyh&amp;amp;AN=2003-08881-009&amp;amp;site=ehost-live Cyberpsychology: A Human-Interaction Perspective Based on Cognitive Modeling] Emond-2003-CHI&lt;br /&gt;
:This paper argues for the applicability of cognitive modeling to cyberpsychology, the study of the impact of computer and Internet interaction on humans. ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1980000/1978559/p62-modha.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315949230_6e20807eca999db6a25f54bcbf51ae6a Cognitive Computing], Modha-2011-CC&lt;br /&gt;
: Aims to unite neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain’s core algorithms.  (--- [[User:Chen Xu|Chen Xu]] 17:25, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: (This item can also go into the Evaluation category) This paper discusses cognitive load theory and its application in measuring the effectiveness of graph visualization. It proposes a model relating user task performance, mental effort, and cognitive load, and conducts experiments to refine the model. This seems to be an attempt along the line of defining quality metrics for visualization through cognitive modeling, which then closely relates to our proposal. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.4716&amp;amp;rep=rep1&amp;amp;type=pdf Supporting Collaboration and Distributed Cognition in Context-Aware Pervasive Computing Environments] Fischer-2004-SCD&lt;br /&gt;
: (Not exactly sure where this paper belongs.) The paper looks at several issues in HCI: 1) collaborative design (multiple users accessing, manipulating, and addressing information, in what they call &amp;quot;large computational spaces&amp;quot;), 2) mobile technology (mobile phones, wireless, etc.), and 3) smart objects (seems to be largely mobile phones and similar devices). This paper is dated and parts of it are very interesting but equally irrelevant. Nonetheless, it asks some important questions about how to deliver information to the user, how to manage search techniques and memory systems (particularly with searching) in HCI, and how to access information, all of which are crucial to &amp;quot;modern&amp;quot; HCI research. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 17:15, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.180 Attention, Habituation and Conditioning: Toward a Computational Model] Balkenius-2000-AHC&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:See the summary for Alexander below. (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices, ~250 rules is enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
: Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic that lies behind tactile user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, by perhaps forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines some of which we may be able to build on in automating interface evaluation; will certainly apply to manual evaluations [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: The paper presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1979100 Understanding Interaction Design Practices], Goodman et al., CHI &#039;11 (Owner: Diem Tran - Steven, if you already own this, let me know)&lt;br /&gt;
: This is a position paper describing the disconnect between HCI research and real interaction design practices.  It analyzes approaches for studying design practice (e.g., reported practice, anecdotal descriptions, first-person research), and argues a need for generative theories of design in order to address practice.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=223904.223945 Transparent layered user interfaces: an evaluation of a display design to enhance focused and divided attention] Harrison-1995-TLU&lt;br /&gt;
: Proposes a framework for classifying and evaluating user interfaces with semi-transparent windows. Comes out of research investigating graphical user interfaces from an attentional perspective. [[User:Jenna Zeigen|Jenna Zeigen]] 11:31, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge - Improving Decisions About Wealth, Health, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using Graphical Interaction Histories within the Tableau InfoVis application.  This is a great recent example of &amp;quot;workflow analysis&amp;quot; that we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to find the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Same as the previous one.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=3xwfia2DpmoC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=inspiration&amp;amp;ots=WxmfUK8fgu&amp;amp;sig=RZxglxiWjKIu5MHpYRAAO6cqR_I#v=onepage&amp;amp;q&amp;amp;f=false Autonomous robots: from biological inspiration to implementation and control ]&lt;br /&gt;
:Autonomous robots are intelligent machines capable of performing tasks in the world by themselves, without explicit human control. This book examines the underlying technology, including control, architectures, learning, manipulation, grasping, navigation, and mapping. Living systems can be considered the prototypes of autonomous systems. ([[User: Wenjun Wang | Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1495824 Data, Information, and Knowledge in Visualization], Chen et al., CG&amp;amp;A Jan 09&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.  (As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1137505 An Approach to the Perceptual Optimization of Complex Visualizations], House, Bair, and Ware, TVCG, 2006&lt;br /&gt;
: This paper describes a humans-in-the-loop architecture for guiding layered visualizations with multiple visual parameters toward optimal tunings.  They use a genetic algorithm to iteratively produce new &amp;quot;genomes&amp;quot; of visual parameters that are evaluated by humans (and either passed along or terminated in the genetic process).  Finally they do some analysis on the surviving visualization space (though for me, this was less interesting than the generative visualization method using humans and the GA).  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11) -- &#039;&#039;&#039;Owner: Jenna Zeigen&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1180000/1179816/p168-klein.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315948835_60407af6754ff14995331d7da5b2e5f2 Brain structure visualization using spectral fiber clustering] Klein-2006-BSV&lt;br /&gt;
: Presents a novel algorithm for visualizing white matter fiber tracts in real time, with more accurate results. This visualization algorithm might be adopted in our proposal. ( --- [[User:Chen Xu|Chen Xu]] 17:21, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/570000/566646/p745-van_wijk.pdf?ip=138.16.160.6&amp;amp;CFID=45026276&amp;amp;CFTOKEN=25635165&amp;amp;__acm__=1316442526_0ba5fadd25f8f9e571d2e63ba88f17f9 Image Based Flow Visualization] van Wijk-2002-IBF&lt;br /&gt;
: Describes a two-dimensional fluid flow visualization method, called IBFV, based on the advection and decay of dye. With IBFV, a wide variety of visualization techniques can be emulated: it can visualize flow, generate arrow plots, streamlines, particles, and topological images, and handle unsteady flows. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2004.00753.x/full The State of the Art in Flow Visualization: Dense and Texture-Based Techniques] Laramee-2004-SAF&lt;br /&gt;
: Discusses dense, texture-based flow visualization techniques. (Chen)&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382909 Interactive Visualization of Small World Graphs] van Ham-2004-IVS&lt;br /&gt;
: [[User:Jenna Zeigen|Jenna Zeigen]] 11:03, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=wdh2gqWfQmgC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=visualization&amp;amp;ots=olzI7xnGLy&amp;amp;sig=7F0m2_NpZcU-fr0V5CPzf99PpK4#v=onepage&amp;amp;q&amp;amp;f=false Readings in information visualization: using vision to think ] By Stuart K. Card, Jock D. Mackinlay, Ben Shneiderman.  ([[User: Wenjun Wang | Wenjun Wang]]) 11:09, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=546918 Visualization Toolkit: An Object-Oriented Approach to 3-D Graphics ]   ([[User: Wenjun Wang | Wenjun Wang]]) 11:15, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www2.iwr.uni-heidelberg.de/groups/CoVis/Data/Papers/euroVis10.pdf A Salience-based Quality Metric for Visualization] Jänicke-Chen-2010&lt;br /&gt;
: This paper describes a method for defining quality metrics for visualization based on the distribution of salience over a visualization image. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/990000/989880/p109-plaisant.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932639_76770a28b192503c5f3c0ac3f997a8f1 The Challenge of Information Visualization Evaluation] Plaisant-2004&lt;br /&gt;
: This paper surveys the field of information visualization evaluation - current practices, challenges, and possible next steps. It is a relatively old article, though, so it may be superseded by a more current survey. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1170000/1168162/a9-zuk.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932894_18cbcddc03c6ac39fc573f72790ea5d1 Heuristics for Information Visualization Evaluation] Zuk et al-2006&lt;br /&gt;
: This paper attempts to apply well-known heuristic evaluation methods from HCI to information visualization. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
*[http://www.computer.org/portal/web/csdl/doi/10.1109/MCG.2006.70 Toward Measuring Visualization Insight]&lt;br /&gt;
: Do current approaches for evaluating visualizations provide measures of insight? This viewpoint identifies critical characteristics of insight, argues the fundamental reasons why traditional controlled experiments with benchmark tasks on visualizations do not effectively measure insight, and offers a new approach to controlled experiments that can better capture the notion of insight. ([[User: Wenjun Wang| Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://leadserv.u-bourgogne.fr/files/publications/000599-multi-label-classification-of-music-into-emotions.pdf Multi-Label Classification of Music into Emotions] ISMIR 2008&lt;br /&gt;
: Humans, by nature, are emotionally affected by music. This interdisciplinary paper works toward automated emotion detection in music. Four algorithms are evaluated and compared on this task. ([[User:Wenjun Wang|Wenjun Wang]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
== Visual Analysis ==&lt;br /&gt;
* [http://www.limsi.fr/Individu/tarroux/enseignement/old/FraundorferBischof-wapcv2003.pdf Utilizing Saliency Operators for Image Matching] Fraundorfer-2003-USO&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
* [http://www.springerlink.com/content/l81xhl05464u1473/ Contextual Priming for Object Detection] Torralba-2003-CPO&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5174</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5174"/>
		<updated>2011-09-19T16:11:02Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Cognition */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, from the 1940s, inform visual design and may provide an analogy for the integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  The first and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided search 2. 0. A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, with the process of design)? They find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot; but the models &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753357 Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visual Design], Heer and Bostock, CHI &#039;10 &lt;br /&gt;
: This paper explores crowdsourcing as a viable method for conducting visualization perception evaluations. They replicate some results of Cleveland and McGill&#039;s 1984 graphical perception paper, and do some analysis on cost and performance of using MTurk for these studies on static, chart-type visualizations. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query based on eye-tracking data, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/PSYCO354/pdfstuff/Readings/Evans2.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind: System 1 is innate and fast; System 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green; font-weight: bold&amp;quot;&amp;gt;Owner: [[User:Nathan Malkin|Nathan Malkin]]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1518701.1518717&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Getting inspired!: understanding how and why examples are used in creative design practice] Herring-2009-GIU&lt;br /&gt;
: A user study on the use of examples to improve creativity. Results show that examples are very useful for inspiring designers with new ideas. Surprisingly, inspiring examples are not limited to those in the design domain, but extend to other areas too.&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210002852&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=31-Jan-2011&amp;amp;view=c&amp;amp;wchp=dGLbVBA-zSkWA&amp;amp;md5=b7ee648939c3d41e25e3317f8be617dc/1-s2.0-S0747563210002852-main.pdf Contemporary cognitive load theory research: The good, the bad and the ugly] Kirschner-2010-CCL&lt;br /&gt;
: This review summarizes and critiques 16 papers on cognitive load theory (CLT) and its impact on learning and the ability to navigate different environments. It also discusses the difficulties inherent in the study of cognitive load and the moves made to attack them. While the paper does not contribute directly to our research, it does provide background on some of the issues of usability and &amp;quot;tolerance&amp;quot; of HCI systems that we discussed last week. [[User:Clara Kliman-Silver|Clara Kliman-Silver]] 13:54, 18 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://cacm.acm.org/magazines/2003/3/6879-models-of-attention-in-computing-and-communication/fulltext Models of attention in computing and communication: from principles to applications] Horvitz-2003-MAC&lt;br /&gt;
: [[User:Jenna Zeigen|Jenna Zeigen]] 10:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://rl3tp7zf5x.scholar.serialssolutions.com/?sid=google&amp;amp;auinit=P&amp;amp;aulast=Slovic&amp;amp;atitle=The+construction+of+preference.&amp;amp;id=doi:10.1037/0003-066X.50.5.364&amp;amp;title=American+psychologist&amp;amp;volume=50&amp;amp;issue=5&amp;amp;date=1995&amp;amp;spage=364&amp;amp;issn=0003-066X The construction of preference ]&lt;br /&gt;
&lt;br /&gt;
* [http://search.bwh.harvard.edu/new/pubs/Kunar%20et%20al.%202008%20P%26P.pdf The role of memory and restricted context in repeated visual search] Kunar-2008-RMR&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper which demonstrated a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
&lt;br /&gt;
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance which used curvature to predict the speed of a pen as it moved across a surface, to help disambiguate target selection intent. &lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need individual pages for some chapters. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then measuring these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Modeling) aspect used handles the parallel nature of several human components of HCI and seems to very accurately model the low level tasks from this study.  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
: Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential on menu-technique research over the years. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
: This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039; law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
: Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
: Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
: Any visualization that is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context in order to design a visualization application that fits the needs of real-world users.  This paper studies in detail the daily workflow tasks and work patterns of analysts, managers, and software developers at a medium-sized software company.  It provides strong empirical evidence that users do not work on discrete, well-defined tasks; instead, they switch tasks on average every two to three minutes and organize their work into larger, thematically connected units (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  The paper thus strongly indicates that many information workers are in a constant state of rapid-fire multi-tasking, which suggests that for a visualization to be relevant to these workers, it must fit into, and support, this workflow.  This is just a first step toward understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  Advantages of automated evaluation are purported to be advantages linked to efficiency, such as comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks down a taxonomy with individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g. Smith and Mosier guidelines, Motif style guidelines, etc).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop paper from CHI 2007 discussing the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Discusses things like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidtt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failures, and then discusses misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class, has many good lists of guidelines. Especially Ch.2 pp 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods, a fair introduction to their industrial applications and their costs and benefits, as well as suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG), 5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper on how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous work to analytic tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (as art, science, and technology) and questions and quantifies the utility of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039; law to trajectory-based tasks.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several InfoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr, Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Operators, Methods, and Selection rules) based interaction techniques.  GOMS is a widely used UI analysis paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.pdf The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A prelude to the above paper; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interactive visualization interface for 3D selection of neural pathways in human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and intuitively. (--- [[User:Chen Xu|Chen Xu]] 15:04, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye tracking methodologies continue to grow and improve. (Clara, 11 September 2011--OWNER; Chen -- Discussant)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
: Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.comp.leeds.ac.uk/umuas/reading-group/kaptelinin-ch5.pdf Activity Theory: Implications for Human-Computer Interaction] Kaptelinin--1996-ATI&lt;br /&gt;
: This article discusses activity theory, an alternative to present theories surrounding HCI. In particular, it examines the principal differences between activity theory and cognitive theory, applies it to HCI, and suggests implications for the field. While not directly relevant to the proposal, it offers an alternate framework for some of the issues that we discuss. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pages.cpsc.ucalgary.ca/~saul/wiki/uploads/HCIPapers/landauer-letsgetreal.pdf Let&#039;s Get Real: a Position Paper on the Role of Cognitive Psychology in the Design of Humanely Usable and Useful Systems] Landauer-1991-LGR&lt;br /&gt;
: Perhaps less useful, only because it&#039;s 20 years old, but an interesting read nonetheless: this paper questions the &amp;quot;modern&amp;quot; relevance of cognitive psychology to human-computer interaction design. The primary issue, it argues, is that human-computer systems are entirely unpredictable, and thus some of the modern understanding of cognition (and, indeed, HCI theory) simply cannot apply given the erratic behavior of computer systems. Instead, he addresses some of the more &amp;quot;useful models,&amp;quot; including Fitts&#039; law and theories of visual perception, to define a new space for emerging research in HCI. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1978969&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Mid-air pan-and-zoom on wall-sized displays] Nancel-2011-MPZ&lt;br /&gt;
: The paper describes approaches to perform pan and zoom tasks in mid-air: bimanual &amp;amp; unimanual, linear &amp;amp; circular gestures. &lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979430&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 LiquidText: a flexible, multitouch environment to support active reading] Tashman-2011-LER&lt;br /&gt;
: A technique utilizing multitouch to improve reading efficiency.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979392&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Rethinking &#039;multi-user&#039;: an in-the-wild study of how groups approach a walk-up-and-use tabletop interface] Marshall-2011-RMU&lt;br /&gt;
: An ethnographic study that explores how groups of users approach tabletop interfaces in real environments. Some results contradict existing findings.&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU&lt;br /&gt;
: Discusses the merits of HCI (here called muCI, for muscle-computer interaction) by detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://courses.ischool.utexas.edu/rbias/2009/Spring/INF385P/files/annurev.psych.54.101601.pdf HUMAN-COMPUTER INTERACTION: Psychological Aspects of the Human Use of Computing] Olson-2003-HCI&lt;br /&gt;
: Overview of issues in psychology in HCI ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210001718&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=30-Nov-2010&amp;amp;view=c&amp;amp;wchp=dGLzVlt-zSkzS&amp;amp;md5=06d9eeba2447db1d43abcf99d1d7e995/1-s2.0-S0747563210001718-main.pdf Integrating cognitive load theory and concepts of human–computer interaction] Hollender-2010-ICL&lt;br /&gt;
: This paper compares existing models of cognitive load theory as it applies to HCI, reviews present literature, and discusses current problems and potential advances. Relevant to our work but ventures into theory that is heavier than necessary given our purposes. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 15:58, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=4154947 Gesture Recognition : A Survey] IEEE&lt;br /&gt;
: This article provides a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. ([[User:Wenjun Wang|Wenjun Wang]] , 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science/article/pii/S1071581903001125 A person–artefact–task (PAT) model of flow antecedents in computer-mediated environments] Finneran-2003-PAT&lt;br /&gt;
: Re-evaluates flow theory within the HCI framework and proposes a model that fits best in the field [[User:Jenna Zeigen|Jenna Zeigen]] 11:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the promises] Jacob-2003-ETH&lt;br /&gt;
: This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent feud between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  There is little reason given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the set of criticisms that follows each description, in which the theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
: High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:Attempt to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen—the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R]&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar]&lt;br /&gt;
:Soar is another cognitive architecture developed at CMU, now maintained at the University of Michigan.  (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.4719&amp;amp;rep=rep1&amp;amp;type=pdf Brain-Computer Interfaces and Human-Computer Interaction]&lt;br /&gt;
: (Not sure what heading this ought to go under!) Brain-Computer Interfaces (BCI) offer a neurological corollary to human-computer interfaces: users use their thoughts to signal to machines, instead of relying on physical movements. Thus, the areas activated are purely cognitive, not motor. The article provides an overview of the differences between HCI and BCI, the implications thereof, and the directions that their interaction may take. Relevant to some of the issues and concepts raised in the proposal, in addition to being a rather interesting idea! (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=101 Spanning Seven Orders of Magnitude: A Challenge for Cognitive Modeling], John Anderson, 2002&lt;br /&gt;
: The paper argues that high-level human behavior can be understood by analyzing the chain of fast, low level activity (from 10ms up) in the perceptual/cognitive bands that compose larger behaviors. It gives an intro to ACT-R and variants and some compelling examples for cognitive modeling and eye-tracking. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://search.ebscohost.com.revproxy.brown.edu/login.aspx?direct=true&amp;amp;db=psyh&amp;amp;AN=2003-08881-009&amp;amp;site=ehost-live Cyberpsychology: A Human-Interaction Perspective Based on Cognitive Modeling] Emond-2003-CHI&lt;br /&gt;
:This paper argues for the applicability of cognitive modeling to cyberpsychology, the study of the impact of computer and Internet interaction on humans. ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1980000/1978559/p62-modha.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315949230_6e20807eca999db6a25f54bcbf51ae6a Cognitive Computing], Modha-2011-CC&lt;br /&gt;
: Aims to unite neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain’s core algorithms.  (--- [[User:Chen Xu|Chen Xu]] 17:25, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: (This item can also go into the Evaluation category) This paper discusses cognitive load theory and its application in measuring the effectiveness of graph visualization. The authors propose a model relating user task performance, mental effort, and cognitive load, and conduct experiments to refine it. This seems to be an attempt along the lines of defining quality metrics for visualization through cognitive modeling, which closely relates to our proposal. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.4716&amp;amp;rep=rep1&amp;amp;type=pdf Supporting Collaboration and Distributed Cognition in Context-Aware Pervasive Computing Environments] Fischer-2004-SCD&lt;br /&gt;
: (Not exactly sure where this paper belongs.) The paper looks at several issues in HCI: 1) collaborative design (multiple users accessing, manipulating, and addressing information in what they call &amp;quot;large computational spaces&amp;quot;), 2) mobile technology (mobile phones, wireless, etc.), and 3) smart objects (largely mobile phones and similar devices). The paper is dated, and parts of it, while interesting, are irrelevant. Nonetheless, it asks some important questions about how to deliver information to the user, how to manage search techniques and memory systems (particularly with searching) in HCI, and how to access information, all of which are crucial to &amp;quot;modern&amp;quot; HCI research. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 17:15, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.180 Attention, Habituation and Conditioning: Toward a Computational Model] Balkenius-2000-AHC&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:see summary for Alexander below (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices, ~250 rules is enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
: Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic behind tangible user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, perhaps by forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines some of which we may be able to build on in automating interface evaluation; will certainly apply to manual evaluations [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: The paper presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1979100 Understanding Interaction Design Practices], Goodman et al., CHI &#039;11 (Owner: Diem Tran - Steven, if you already own this, let me know)&lt;br /&gt;
: This is a position paper describing the disconnect between HCI research and real interaction design practices.  It analyzes approaches for studying design practice (e.g., reported practice, anecdotal descriptions, first-person research), and argues a need for generative theories of design in order to address practice.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=223904.223945 Transparent layered user interfaces: an evaluation of a display design to enhance focused and divided attention] Harrison-1995-TLU&lt;br /&gt;
: Proposes a framework for classifying and evaluating user interfaces with semi-transparent windows. Comes out of research investigating graphical user interfaces from an attentional perspective. [[User:Jenna Zeigen|Jenna Zeigen]] 11:31, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge - Improving Decisions About Health, Wealth, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using graphical interaction histories within the Tableau InfoVis application.  This is a great recent example of the &amp;quot;workflow analysis&amp;quot; we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An Exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to identify the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Same as the previous one.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=3xwfia2DpmoC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=inspiration&amp;amp;ots=WxmfUK8fgu&amp;amp;sig=RZxglxiWjKIu5MHpYRAAO6cqR_I#v=onepage&amp;amp;q&amp;amp;f=false Autonomous robots: from biological inspiration to implementation and control ]&lt;br /&gt;
:Autonomous robots are intelligent machines capable of performing tasks in the world by themselves, without explicit human control. This book examines the underlying technology, including control, architectures, learning, manipulation, grasping, navigation, and mapping. Living systems can be considered the prototypes of autonomous systems. ([[User: Wenjun Wang | Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1495824 Data, Information, and Knowledge in Visualization], Chen et al., CG&amp;amp;A Jan 09&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.  (As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1137505 An Approach to the Perceptual Optimization of Complex Visualizations], House, Bair, and Ware, TVCG, 2006&lt;br /&gt;
: This paper describes a humans-in-the-loop architecture for guiding layered visualizations with multiple visual parameters toward optimal tunings.  They use a genetic algorithm to iteratively produce new &amp;quot;genomes&amp;quot; of visual parameters that are evaluated by humans (and either passed along or terminated in the genetic process).  Finally they do some analysis on the surviving visualization space (though for me, this was less interesting than the generative visualization method using humans and the GA).  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11) -- &#039;&#039;&#039;Owner: Jenna Zeigen&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1180000/1179816/p168-klein.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315948835_60407af6754ff14995331d7da5b2e5f2 Brain structure visualization using spectral fiber clustering] Klein-2006-BSV&lt;br /&gt;
: Presents a novel algorithm for visualizing white matter fiber tracts in real time, with more accurate results. This visualization algorithm might be adopted in our proposal. ( --- [[User:Chen Xu|Chen Xu]] 17:21, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/570000/566646/p745-van_wijk.pdf?ip=138.16.160.6&amp;amp;CFID=45026276&amp;amp;CFTOKEN=25635165&amp;amp;__acm__=1316442526_0ba5fadd25f8f9e571d2e63ba88f17f9 Image Based Flow Visualization] van Wijk-2002-IBF&lt;br /&gt;
: Describes IBFV, a two-dimensional fluid flow visualization method based on the advection and decay of dye. With IBFV, a wide variety of visualization techniques can be emulated: it can visualize flow, generate arrow plots, streamlines, particles, and topological images, and handle unsteady flows. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2004.00753.x/full The State of the Art in Flow Visualization: Dense and Texture-Based Techniques] Laramee-2004-SAF&lt;br /&gt;
: Discusses dense, texture-based flow visualization techniques. (Chen)&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382909 Interactive Visualization of Small World Graphs] van Ham-2004-IVS&lt;br /&gt;
: [[User:Jenna Zeigen|Jenna Zeigen]] 11:03, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=wdh2gqWfQmgC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=visualization&amp;amp;ots=olzI7xnGLy&amp;amp;sig=7F0m2_NpZcU-fr0V5CPzf99PpK4#v=onepage&amp;amp;q&amp;amp;f=false Readings in information visualization: using vision to think ] By Stuart K. Card, Jock D. Mackinlay, Ben Shneiderman.  ([[User: Wenjun Wang | Wenjun Wang]]) 11:09, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=546918 Visualization Toolkit: An Object-Oriented Approach to 3-D Graphics ]   ([[User: Wenjun Wang | Wenjun Wang]]) 11:15, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www2.iwr.uni-heidelberg.de/groups/CoVis/Data/Papers/euroVis10.pdf A Salience-based Quality Metric for Visualization] Jänicke-Chen-2010&lt;br /&gt;
: This paper describes a method for defining quality metrics for visualization based on the distribution of salience over a visualization image. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/990000/989880/p109-plaisant.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932639_76770a28b192503c5f3c0ac3f997a8f1 The Challenge of Information Visualization Evaluation] Plaisant-2004&lt;br /&gt;
: This paper surveys the field of information visualization evaluation: current practices, challenges, and possible next steps. It is a relatively old article, though, so it may be superseded by a more current survey. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1170000/1168162/a9-zuk.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932894_18cbcddc03c6ac39fc573f72790ea5d1 Heuristics for Information Visualization Evaluation] Zuk et al-2006&lt;br /&gt;
: This paper attempts to apply some well-known heuristic evaluation techniques from HCI to information visualization. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
*[http://www.computer.org/portal/web/csdl/doi/10.1109/MCG.2006.70 Toward Measuring Visualization Insight]&lt;br /&gt;
: Do current approaches for evaluating visualizations provide measures of insight? This viewpoint identifies critical characteristics of insight, argues the fundamental reasons why traditional controlled experiments with benchmark tasks on visualizations do not effectively measure insight, and offers a new approach to controlled experiments that can better capture the notion of insight. ([[User: Wenjun Wang| Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://leadserv.u-bourgogne.fr/files/publications/000599-multi-label-classification-of-music-into-emotions.pdf  MULTI-LABEL CLASSIFICATION OF MUSIC INTO EMOTIONS] ISMIR 2008&lt;br /&gt;
: Humans are by nature emotionally affected by music. This interdisciplinary paper works toward automated emotion detection in music; four algorithms are evaluated and compared on this task. ([[User:Wenjun Wang|Wenjun Wang]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
== Visual Analysis ==&lt;br /&gt;
* [http://www.limsi.fr/Individu/tarroux/enseignement/old/FraundorferBischof-wapcv2003.pdf Utilizing Saliency Operators for Image Matching] Fraundorfer-2003-USO&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5173</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5173"/>
		<updated>2011-09-19T16:08:07Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Visual Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, from the 1940s, inform visual design and may provide an analogy for the integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  The first and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided Search 2.0: A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, the process of design)? They find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot; but it &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753357 Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visual Design], Heer and Bostock, CHI &#039;10 &lt;br /&gt;
: This paper explores crowdsourcing as a viable method for conducting visualization perception evaluations. They replicate some results of Cleveland and McGill&#039;s 1984 graphical perception paper, and do some analysis on cost and performance of using MTurk for these studies on static, chart-type visualizations. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query from assumptions from eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
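: The seven costs and two gulfs enumerated in the abstract above can be captured in a small lookup; the code is only an illustrative sketch of the taxonomy, not anything from the paper:&lt;br /&gt;

```python
# The seven interaction costs from the abstract, keyed by number.
COSTS = {
    1: "Decision costs to form goals",
    2: "System-power costs to form system operations",
    3: "Multiple input mode costs to form physical sequences",
    4: "Physical-motion costs to execute sequences",
    5: "Visual-cluttering costs to perceive state",
    6: "View-change costs to interpret perception",
    7: "State-change costs to evaluate interpretation",
}

def gulf(cost_number):
    """Map a cost to its gulf: costs 2-4 form the gulf of
    execution, 5-7 the gulf of evaluation, and cost 1 sits
    at goal formation."""
    if not 1 <= cost_number <= 7:
        raise ValueError("costs are numbered 1-7")
    if cost_number == 1:
        return "goal formation"
    return "execution" if cost_number <= 4 else "evaluation"
```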
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/PSYCO354/pdfstuff/Readings/Evans2.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green; font-weight: bold&amp;quot;&amp;gt;Owner: [[User:Nathan Malkin|Nathan Malkin]]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1518701.1518717&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Getting inspired!: understanding how and why examples are used in creative design practice] Herring-2009-GIU&lt;br /&gt;
: A user study on the use of examples to improve creativity. Results show that examples are very useful for inspiring designers with new ideas. Surprisingly, inspiring examples are not limited to the design domain but extend to other areas as well.&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210002852&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=31-Jan-2011&amp;amp;view=c&amp;amp;wchp=dGLbVBA-zSkWA&amp;amp;md5=b7ee648939c3d41e25e3317f8be617dc/1-s2.0-S0747563210002852-main.pdf Contemporary cognitive load theory research: The good, the bad and the ugly] Kirschner-2010-CCL&lt;br /&gt;
: This review summarizes and critiques 16 papers on cognitive load theory (CLT) and its impact on learning and the ability to navigate different environments. It also discusses the difficulties inherent in the study of cognitive load and the approaches taken to address them. While the paper does not contribute directly to our research, it does provide background on some of the issues of usability and &amp;quot;tolerance&amp;quot; of HCI systems that we discussed last week. [[User:Clara Kliman-Silver|Clara Kliman-Silver]] 13:54, 18 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://cacm.acm.org/magazines/2003/3/6879-models-of-attention-in-computing-and-communication/fulltext Models of attention in computing and communication: from principles to applications] Horvitz-2003-MAC&lt;br /&gt;
: [[User:Jenna Zeigen|Jenna Zeigen]] 10:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://rl3tp7zf5x.scholar.serialssolutions.com/?sid=google&amp;amp;auinit=P&amp;amp;aulast=Slovic&amp;amp;atitle=The+construction+of+preference.&amp;amp;id=doi:10.1037/0003-066X.50.5.364&amp;amp;title=American+psychologist&amp;amp;volume=50&amp;amp;issue=5&amp;amp;date=1995&amp;amp;spage=364&amp;amp;issn=0003-066X The construction of preference ]&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
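: To make the idea concrete, here is a toy sketch of extracting low-level mouse features (mean speed, pause count) from (t, x, y) samples; the paper&#039;s actual feature set and classifier are different and more sophisticated:&lt;br /&gt;

```python
def mouse_features(events):
    """Toy feature extraction from a list of (t, x, y) mouse
    samples: mean pointer speed and number of pauses.  Features
    like these could feed a novice-vs-expert classifier."""
    speeds, pauses = [], 0
    for (t0, x0, y0), (t1, x1, y1) in zip(events, events[1:]):
        dt = t1 - t0
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(dist / dt if dt else 0.0)
        if dist == 0:
            pauses += 1
    return {"mean_speed": sum(speeds) / len(speeds), "pauses": pauses}
```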
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper demonstrating that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
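: For reference, the Shannon formulation of Fitts&#039;s law that such evaluations typically fit; a and b below are placeholder constants, which a real study would fit from data:&lt;br /&gt;

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts's law (Shannon form): MT = a + b * log2(D/W + 1).
    Techniques like the bubble cursor effectively enlarge the
    target width W, lowering the index of difficulty."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty
```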
&lt;br /&gt;
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance, which used curvature to predict the speed of a pen as it moved across a surface, to help disambiguate target selection intent. &lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; individual chapters may merit their own entries here. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then measuring these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Modeling) aspect used handles the parallel nature of several human components of HCI and seems to very accurately model the low level tasks from this study.  (David)&lt;br /&gt;
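: A lightweight relative of GOMS is the Keystroke-Level Model; a sketch using the commonly cited approximate operator times from Card, Moran, and Newell (these values are textbook approximations, not figures from the Ernestine study):&lt;br /&gt;

```python
# Approximate Keystroke-Level Model operator times in seconds;
# real analyses calibrate these per user and device.
KLM_TIMES = {
    "K": 0.2,   # keystroke (skilled typist)
    "P": 1.1,   # point with a mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(operators):
    """Sum operator times for a sequence such as 'MHPK'
    (think, move hand to mouse, point, click)."""
    return sum(KLM_TIMES[op] for op in operators)
```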
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
: Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential over the years on research into menu techniques. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
: This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
: Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares the performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
: Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
: Any visualization which is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context to design the visualization application appropriately to fit the needs of real world users.  This paper studies in detail the daily workflow tasks and patterns of work of analysts, managers and software developers in a medium-sized software company.  This paper provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality, switch tasks on average every two to three minutes, and instead, work on larger thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, it is strongly indicated by this paper that many information workers are in a constant state of rapid fire multi-tasking.  This suggests that for a visualization to be relevant to any of these information workers, it would need to fit into, and support, this workflow.  This is just a first step towards understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  Advantages of automated evaluation are purported to be advantages linked to efficiency, such as comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks down a taxonomy with individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g. Smith and Mosier guidelines, Motif style guidelines, etc).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop discussion from CHI 2007 on the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Covers topics such as adaptive web interfaces, mashups, and dynamic interactions.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidtt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failures, and then discusses misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class, has many good lists of guidelines. Especially Ch.2 pp 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods; a fair introduction to the industrial applications, costs, and benefits of the methods, as well as suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG), 5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper on how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous paper to analytic tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (art, science, and technology) and questions and quantifies the utility of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039;s law to trajectory-based tasks.&lt;br /&gt;
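: The steering law in the paper gives T = a + b ∫ ds/W(s) for a path through a tunnel of width W(s); a sketch evaluating this for a piecewise-constant tunnel, with a and b as placeholder constants:&lt;br /&gt;

```python
def steering_time(segments, a=0.1, b=0.1):
    """Accot-Zhai steering law for a tunnel made of
    constant-width segments: T = a + b * sum(length_i / width_i).
    A single straight segment reduces to T = a + b * (D / W)."""
    return a + b * sum(length / width for length, width in segments)
```

: A narrower tunnel increases the predicted time, mirroring how Fitts&#039;s law penalizes smaller targets.&lt;br /&gt;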
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several infoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr.Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Operators, Methods, and Selection rules) based interaction techniques.  GOMS is a widely used UI paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.pdf The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A paper serving as a prelude to the above; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interactive visualization interface for 3D selection of the neural pathways of human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and intuitively. (--- [[User:Chen Xu|Chen Xu]] 15:04, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye tracking methodologies continue to grow and improve. (Clara, 11 September 2011--OWNER; Chen -- Discussant)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
: Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.comp.leeds.ac.uk/umuas/reading-group/kaptelinin-ch5.pdf Activity Theory: Implications for Human-Computer Interaction] Kaptelinin-1996-ATI&lt;br /&gt;
: This article discusses activity theory, an alternative to present theories surrounding HCI. In particular, it examines the principal differences between activity theory and cognitive theory, applies it to HCI, and suggests implications for the field. While not directly relevant to the proposal, it offers an alternate framework for some of the issues that we discuss. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pages.cpsc.ucalgary.ca/~saul/wiki/uploads/HCIPapers/landauer-letsgetreal.pdf Let&#039;s Get Real: a Position Paper on the Role of Cognitive Psychology in the Design of Humanely Usable and Useful Systems] Landauer-1991-LGR&lt;br /&gt;
: Perhaps less useful, only because it&#039;s 20 years old, but an interesting read nonetheless: this paper questions the &amp;quot;modern&amp;quot; relevance of cognitive psychology to human-computer interaction design. The primary issue, it argues, is that human-computer systems are entirely unpredictable, and thus, some of the modern understanding of cognition (and, indeed, HCI theory) simply cannot apply given the erratic behavior of computer systems. Instead, he addresses some of the more &amp;quot;useful models,&amp;quot; including Fitts&#039;s law and theories of visual perception, to define a new space for emerging research in HCI. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1978969&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Mid-air pan-and-zoom on wall-sized displays] Nancel-2011-MPZ&lt;br /&gt;
: The paper describes approaches to perform pan and zoom tasks in mid-air: bimanual &amp;amp; unimanual, linear &amp;amp; circular gestures. &lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979430&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 LiquidText: a flexible, multitouch environment to support active reading] Tashman-2011-LER&lt;br /&gt;
: A technique utilizing multitouch to improve reading efficiency.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979392&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Rethinking &#039;multi-user&#039;: an in-the-wild study of how groups approach a walk-up-and-use tabletop interface] Marshall-2011-RMU&lt;br /&gt;
: An ethnographic study that explores how groups of users approach tabletop interfaces in real environments. Some results contradict existing findings.&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU&lt;br /&gt;
: Discusses the merits of HCI (here called muCI, for muscle-computer interaction) by detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://courses.ischool.utexas.edu/rbias/2009/Spring/INF385P/files/annurev.psych.54.101601.pdf HUMAN-COMPUTER INTERACTION: Psychological Aspects of the Human Use of Computing] Olson-2003-HCI&lt;br /&gt;
: Overview of issues in psychology in HCI ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210001718&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=30-Nov-2010&amp;amp;view=c&amp;amp;wchp=dGLzVlt-zSkzS&amp;amp;md5=06d9eeba2447db1d43abcf99d1d7e995/1-s2.0-S0747563210001718-main.pdf Integrating cognitive load theory and concepts of human–computer interaction] Hollender-2010-ICL&lt;br /&gt;
: This paper compares existing models of cognitive load theory as it applies to HCI, reviews present literature, and discusses current problems and potential advances. Relevant to our work but ventures into theory that is heavier than necessary given our purposes. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 15:58, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=4154947 Gesture Recognition: A Survey] IEEE&lt;br /&gt;
: This article provides a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. ([[User:Wenjun Wang|Wenjun Wang]], 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science/article/pii/S1071581903001125 A person–artefact–task (PAT) model of flow antecedents in computer-mediated environments] Finneran-2003-PAT&lt;br /&gt;
: Re-evaluates flow theory within the HCI framework and proposes a model that fits best in the field. [[User:Jenna Zeigen|Jenna Zeigen]] 11:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the promises] Jacob-2003-ETH&lt;br /&gt;
: This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent feud between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  There is little reason given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticisms which follow descriptions, in which theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
: High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:Attempt to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen—the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R]&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar]&lt;br /&gt;
:Soar is another cognitive architecture developed at CMU, now maintained at UMich.  (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.4719&amp;amp;rep=rep1&amp;amp;type=pdf Brain-Computer Interfaces and Human-Computer Interaction]&lt;br /&gt;
: (Not sure what heading this ought to go under!) Brain-Computer Interfaces (BCI) offer a neurological corollary to human-computer interfaces: users use their thoughts to signal to machines, instead of relying on physical movements. Thus, the areas activated are purely cognitive, not motor. The article provides an overview of the differences between HCI and BCI, the implications thereof, and the directions that their interaction may take. Relevant to some of the issues and concepts raised in the proposal, in addition to being a rather interesting idea! (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=101 Spanning Seven Orders of Magnitude: A Challenge for Cognitive Modeling], John Anderson, 2002&lt;br /&gt;
: The paper argues that high-level human behavior can be understood by analyzing the chain of fast, low level activity (from 10ms up) in the perceptual/cognitive bands that compose larger behaviors. It gives an intro to ACT-R and variants and some compelling examples for cognitive modeling and eye-tracking. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://search.ebscohost.com.revproxy.brown.edu/login.aspx?direct=true&amp;amp;db=psyh&amp;amp;AN=2003-08881-009&amp;amp;site=ehost-live Cyberpsychology: A Human-Interaction Perspective Based on Cognitive Modeling] Emond-2003-CHI&lt;br /&gt;
:This paper argues for the applicability of cognitive modeling to cyberpsychology, the study of the impact of computer and Internet interaction on humans. ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1980000/1978559/p62-modha.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315949230_6e20807eca999db6a25f54bcbf51ae6a Cognitive Computing], Modha-2011-CC&lt;br /&gt;
: Aims to unite neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain’s core algorithms.  (--- [[User:Chen Xu|Chen Xu]] 17:25, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: (This item can also go into the Evaluation category) This paper discusses cognitive load theory and its application in measuring the effectiveness of graph visualization. A model relating user task performance, mental effort, and cognitive load is proposed, and experiments have been conducted to refine the model. This seems to be an attempt along the lines of defining quality metrics for visualization through cognitive modeling, which closely relates to our proposal. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.4716&amp;amp;rep=rep1&amp;amp;type=pdf Supporting Collaboration and Distributed Cognition in Context-Aware Pervasive Computing Environments] Fischer-2004-SCD&lt;br /&gt;
: (Not exactly sure where this paper belongs.) The paper looks at several issues in HCI: 1) collaborative design (multiple users accessing, manipulating, and addressing information, in what they call &amp;quot;large computational spaces&amp;quot;), 2) mobile technology (mobile phones, wireless, etc.), and 3) smart objects (seems to be largely mobile phones and similar devices). This paper is dated and parts of it are very interesting but equally irrelevant. Nonetheless, it asks some important questions about how to deliver information to the user, how to manage search techniques and memory systems (particularly with searching) in HCI, and how to access information, all of which are crucial to &amp;quot;modern&amp;quot; HCI research. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 17:15, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.180 Attention, Habituation and Conditioning: Toward a Computational Model] Balkenius-2000-AHC&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:see summary for Alexander below (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices, ~250 rules is enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
: Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic behind tactile user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, by perhaps forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines some of which we may be able to build on in automating interface evaluation; will certainly apply to manual evaluations [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: Presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1979100 Understanding Interaction Design Practices], Goodman et al., CHI &#039;11 (Owner: Diem Tran - Steven, if you already own this, let me know)&lt;br /&gt;
: This is a position paper describing the disconnect between HCI research and real interaction design practices.  It analyzes approaches for studying design practice (e.g., reported practice, anecdotal descriptions, first-person research), and argues a need for generative theories of design in order to address practice.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=223904.223945 Transparent layered user interfaces: an evaluation of a display design to enhance focused and divided attention] Harrison-1995-TLU&lt;br /&gt;
: Proposes a framework for classifying and evaluating user interfaces with semi-transparent windows. Comes out of research investigating graphical user interfaces from an attentional perspective. [[User:Jenna Zeigen|Jenna Zeigen]] 11:31, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge: Improving Decisions About Health, Wealth, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using Graphical Interaction Histories within the Tableau InfoVis application.  This is a great recent example of &amp;quot;workflow analysis&amp;quot; that we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to find the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Same as the previous one.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=3xwfia2DpmoC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=inspiration&amp;amp;ots=WxmfUK8fgu&amp;amp;sig=RZxglxiWjKIu5MHpYRAAO6cqR_I#v=onepage&amp;amp;q&amp;amp;f=false Autonomous robots: from biological inspiration to implementation and control ]&lt;br /&gt;
:Autonomous robots are intelligent machines capable of performing tasks in the world by themselves, without explicit human control. This book examines the underlying technology, including control, architectures, learning, manipulation, grasping, navigation, and mapping. Living systems can be considered the prototypes of autonomous systems. ([[User: Wenjun Wang | Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1495824 Data, Information, and Knowledge in Visualization], Chen et al., CG&amp;amp;A Jan 09&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.  (As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1137505 An Approach to the Perceptual Optimization of Complex Visualizations], House, Bair, and Ware, TVCG, 2006&lt;br /&gt;
: This paper describes a humans-in-the-loop architecture for guiding layered visualizations with multiple visual parameters toward optimal tunings.  They use a genetic algorithm to iteratively produce new &amp;quot;genomes&amp;quot; of visual parameters that are evaluated by humans (and either passed along or terminated in the genetic process).  Finally they do some analysis on the surviving visualization space (though for me, this was less interesting than the generative visualization method using humans and the GA).  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11) -- &#039;&#039;&#039;Owner: Jenna Zeigen&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1180000/1179816/p168-klein.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315948835_60407af6754ff14995331d7da5b2e5f2 Brain structure visualization using spectral fiber clustering] Klein-2006-BSV&lt;br /&gt;
: Present a novel algorithm that allows for visualizing white matter fiber tracts in real time. And the result is more accurate. This visualizing algorithm might be adopted in our proposal. ( --- [[User:Chen Xu|Chen Xu]] 17:21, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/570000/566646/p745-van_wijk.pdf?ip=138.16.160.6&amp;amp;CFID=45026276&amp;amp;CFTOKEN=25635165&amp;amp;__acm__=1316442526_0ba5fadd25f8f9e571d2e63ba88f17f9 Image Based Flow Visualization] van Wijk-2002-IBF&lt;br /&gt;
: Describes a two-dimensional fluid flow visualization method, IBFV, based on the advection and decay of dye. With IBFV, a wide variety of visualization techniques can be emulated: it can visualize flow, generate arrow plots, streamlines, particles, and topological images, and handle unsteady flows. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2004.00753.x/full The State of the Art in Flow Visualization: Dense and Texture-Based Techniques] Laramee-2004-SAF&lt;br /&gt;
: Discusses dense, texture-based flow visualization techniques. (Chen)&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382909 Interactive Visualization of Small World Graphs] van Ham-2004-IVS&lt;br /&gt;
: [[User:Jenna Zeigen|Jenna Zeigen]] 11:03, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=wdh2gqWfQmgC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=visualization&amp;amp;ots=olzI7xnGLy&amp;amp;sig=7F0m2_NpZcU-fr0V5CPzf99PpK4#v=onepage&amp;amp;q&amp;amp;f=false Readings in information visualization: using vision to think ] By Stuart K. Card, Jock D. Mackinlay, Ben Shneiderman.  ([[User: Wenjun Wang | Wenjun Wang]]) 11:09, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=546918 Visualization Toolkit: An Object-Oriented Approach to 3-D Graphics ]   ([[User: Wenjun Wang | Wenjun Wang]]) 11:15, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www2.iwr.uni-heidelberg.de/groups/CoVis/Data/Papers/euroVis10.pdf A Salience-based Quality Metric for Visualization] Jänicke-Chen-2010&lt;br /&gt;
: This paper describes a method for defining quality metrics for visualization based on the distribution of salience over a visualization image. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/990000/989880/p109-plaisant.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932639_76770a28b192503c5f3c0ac3f997a8f1 The Challenge of Information Visualization Evaluation] Plaisant-2004&lt;br /&gt;
: This paper surveys the field of information visualization evaluation - current practices, challenges, and possible next steps. It is a relatively old article, though, so it may be superseded by a more recent survey. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1170000/1168162/a9-zuk.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932894_18cbcddc03c6ac39fc573f72790ea5d1 Heuristics for Information Visualization Evaluation] Zuk et al-2006&lt;br /&gt;
: This paper attempts to adapt some well-known heuristic evaluation techniques from HCI to information visualization. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
*[http://www.computer.org/portal/web/csdl/doi/10.1109/MCG.2006.70 Toward Measuring Visualization Insight]&lt;br /&gt;
: Do current approaches for evaluating visualizations provide measures of insight? This viewpoint identifies critical characteristics of insight, argues the fundamental reasons why traditional controlled experiments with benchmark tasks on visualizations do not effectively measure insight, and offers a new approach to controlled experiments that can better capture the notion of insight. ([[User: Wenjun Wang| Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://leadserv.u-bourgogne.fr/files/publications/000599-multi-label-classification-of-music-into-emotions.pdf Multi-Label Classification of Music into Emotions] ISMIR 2008&lt;br /&gt;
: Humans, by nature, are emotionally affected by music. This interdisciplinary paper works toward automated emotion detection in music. Four algorithms are evaluated and compared on this task. ([[User:Wenjun Wang|Wenjun Wang]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
== Visual Analysis ==&lt;br /&gt;
* [http://www.limsi.fr/Individu/tarroux/enseignement/old/FraundorferBischof-wapcv2003.pdf Utilizing Saliency Operators for Image Matching] Fraundorfer-2003-USO&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5172</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5172"/>
		<updated>2011-09-19T16:07:49Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, dating from the 1940s, inform visual design and may provide an analogy for the integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  The first and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided search 2. 0. A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, with the process of design)? They find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot;, but it &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753357 Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visual Design], Heer and Bostock, CHI &#039;10 &lt;br /&gt;
: This paper explores crowdsourcing as a viable method for conducting visualization perception evaluations. They replicate some results of Cleveland and McGill&#039;s 1984 graphical perception paper, and do some analysis on cost and performance of using MTurk for these studies on static, chart-type visualizations. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query from assumptions from eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/PSYCO354/pdfstuff/Readings/Evans2.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind: System 1 is innate and fast; System 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means of achieving a goal also primes the goal, but inhibits alternative means of achieving the same goal. This means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means of pursuing it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green; font-weight: bold&amp;quot;&amp;gt;Owner: [[User:Nathan Malkin|Nathan Malkin]]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1518701.1518717&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Getting inspired!: understanding how and why examples are used in creative design practice] Herring-2009-GIU&lt;br /&gt;
: A user study on the use of examples to improve creativity. Results show that examples are very useful for inspiring designers with new ideas. Surprisingly, inspiring examples are not limited to those in the design domain, but extend to other areas too.&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210002852&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=31-Jan-2011&amp;amp;view=c&amp;amp;wchp=dGLbVBA-zSkWA&amp;amp;md5=b7ee648939c3d41e25e3317f8be617dc/1-s2.0-S0747563210002852-main.pdf Contemporary cognitive load theory research: The good, the bad and the ugly] Kirschner-2010-CCL&lt;br /&gt;
: This review summarizes and critiques 16 papers on cognitive load theory (CLT) and its impact on learning and the ability to navigate different environments. It also discusses the difficulties inherent in the study of cognitive load and moves made to attack them. While the paper does not contribute directly to our research, it does provide background on some of the issues of usability and &amp;quot;tolerance&amp;quot; of HCI systems that we discussed last week. [[User:Clara Kliman-Silver|Clara Kliman-Silver]] 13:54, 18 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://cacm.acm.org/magazines/2003/3/6879-models-of-attention-in-computing-and-communication/fulltext Models of attention in computing and communication: from principles to applications] Horvitz-2003-MAC&lt;br /&gt;
: [[User:Jenna Zeigen|Jenna Zeigen]] 10:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://rl3tp7zf5x.scholar.serialssolutions.com/?sid=google&amp;amp;auinit=P&amp;amp;aulast=Slovic&amp;amp;atitle=The+construction+of+preference.&amp;amp;id=doi:10.1037/0003-066X.50.5.364&amp;amp;title=American+psychologist&amp;amp;volume=50&amp;amp;issue=5&amp;amp;date=1995&amp;amp;spage=364&amp;amp;issn=0003-066X The construction of preference ]&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper demonstrating that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
&lt;br /&gt;
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance, which used curvature to predict the speed of a pen as it moved across a surface, to help disambiguate target selection intent. &lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then measuring these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Modeling) aspect used handles the parallel nature of several human components of HCI and seems to very accurately model the low level tasks from this study.  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
: Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential over the years on research into menu techniques. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
: This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
: Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares the performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
: Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
: Any visualization which is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context in order to design the visualization application to fit the needs of real-world users.  This paper studies in detail the daily workflow tasks and work patterns of analysts, managers, and software developers in a medium-sized software company.  It provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality switch tasks on average every two to three minutes, working instead on larger thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  The paper thus strongly indicates that many information workers are in a constant state of rapid-fire multitasking, which suggests that for a visualization to be relevant to these information workers, it would need to fit into, and support, this workflow.  This is just a first step towards understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  The purported advantages of automated evaluation are linked to efficiency: comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks the taxonomy down with the individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g., the Smith and Mosier guidelines, the Motif style guidelines, etc.).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop discussion from CHI 2007 discussing the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Discusses things like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failure and then discuss misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class; it has many good lists of guidelines, especially Ch. 2, pp. 59-102. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods, a fair introduction to their industrial applications and their costs and benefits, and suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG), 5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper on how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous work to analytic tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (art, science, and technology) and questions and quantifies its utility.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039; law to trajectory-based tasks.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several InfoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr.Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Operators, Methods, and Selection rules) based interaction techniques.  GOMS is a widely used UI analysis paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.pdf The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A prelude to the above; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interactive visualization interface for 3D selection of neural pathways in human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and intuitively. (--- [[User:Chen Xu|Chen Xu]] 15:04, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye tracking methodologies continue to grow and improve. (Clara, 11 September 2011--OWNER; Chen -- Discussant)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
: Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.comp.leeds.ac.uk/umuas/reading-group/kaptelinin-ch5.pdf Activity Theory: Implications for Human-Computer Interaction] Kaptelinin-1996-ATI&lt;br /&gt;
: This article discusses activity theory, an alternative to present theories surrounding HCI. In particular, it examines the principal differences between activity theory and cognitive theory, applies it to HCI, and suggests implications for the field. While not directly relevant to the proposal, it offers an alternate framework for some of the issues that we discuss. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pages.cpsc.ucalgary.ca/~saul/wiki/uploads/HCIPapers/landauer-letsgetreal.pdf Let&#039;s Get Real: a Position Paper on the Role of Cognitive Psychology in the Design of Humanely Usable and Useful Systems] Landauer-1991-LGR&lt;br /&gt;
: Perhaps less useful, only because it&#039;s 20 years old, but an interesting read nonetheless: this paper questions the &amp;quot;modern&amp;quot; relevance of cognitive psychology to human-computer interaction design. The primary issue, it argues, is that human-computer systems are entirely unpredictable, and thus, some of the modern understanding of cognition (and, indeed, HCI theory) simply cannot apply given the erratic behavior of computer systems. Instead, the author addresses some of the more &amp;quot;useful models,&amp;quot; including Fitts&#039; law and theories of visual perception, to define a new space for emerging research in HCI. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1978969&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Mid-air pan-and-zoom on wall-sized displays] Nancel-2011-MPZ&lt;br /&gt;
: The paper describes approaches to perform pan and zoom tasks in mid-air: bimanual &amp;amp; unimanual, linear &amp;amp; circular gestures. &lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979430&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 LiquidText: a flexible, multitouch environment to support active reading] Tashman-2011-LER&lt;br /&gt;
: A technique utilizing multitouch to improve reading efficiency.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979392&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Rethinking &#039;multi-user&#039;: an in-the-wild study of how groups approach a walk-up-and-use tabletop interface] Marshall-2011-RMU&lt;br /&gt;
: An ethnographic study that explores how groups of users approach tabletop interfaces in real environments. Some results contradict existing findings.&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU&lt;br /&gt;
: Discusses the merits of HCI (here called muCI, for muscle-computer interaction) by detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://courses.ischool.utexas.edu/rbias/2009/Spring/INF385P/files/annurev.psych.54.101601.pdf Human-Computer Interaction: Psychological Aspects of the Human Use of Computing] Olson-2003-HCI&lt;br /&gt;
: Overview of issues in psychology in HCI ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210001718&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=30-Nov-2010&amp;amp;view=c&amp;amp;wchp=dGLzVlt-zSkzS&amp;amp;md5=06d9eeba2447db1d43abcf99d1d7e995/1-s2.0-S0747563210001718-main.pdf Integrating cognitive load theory and concepts of human–computer interaction] Hollender-2010-ICL&lt;br /&gt;
: This paper compares existing models of cognitive load theory as it applies to HCI, reviews present literature, and discusses current problems and potential advances. Relevant to our work but ventures into theory that is heavier than necessary given our purposes. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 15:58, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=4154947 Gesture Recognition: A Survey] IEEE&lt;br /&gt;
: This article provides a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. ([[User:Wenjun Wang|Wenjun Wang]] , 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science/article/pii/S1071581903001125 A person–artefact–task (PAT) model of flow antecedents in computer-mediated environments] Finneran-2003-PAT&lt;br /&gt;
: Re-evaluates flow theory within the HCI framework and proposes a model that best fits the field. [[User:Jenna Zeigen|Jenna Zeigen]] 11:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises] Jacob-2003-ETH&lt;br /&gt;
: This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent tension between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  Little reason is given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper&#039;s strongest point is the criticism that follows each description, in which the theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
: High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as it relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:An attempt to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen: the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R]&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar]&lt;br /&gt;
:Soar is another cognitive architecture originally developed at CMU, now maintained at UMich.  (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.4719&amp;amp;rep=rep1&amp;amp;type=pdf Brain-Computer Interfaces and Human-Computer Interaction]&lt;br /&gt;
: (Not sure what heading this ought to go under!) Brain-Computer Interfaces (BCI) offer a neurological counterpart to human-computer interfaces: users signal to machines with their thoughts instead of relying on physical movements. Thus, the areas activated are purely cognitive, not motor. The article provides an overview of the differences between HCI and BCI, the implications thereof, and the directions that their interaction may take. Relevant to some of the issues and concepts raised in the proposal, in addition to being a rather interesting idea! (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=101 Spanning Seven Orders of Magnitude: A Challenge for Cognitive Modeling], John Anderson, 2002&lt;br /&gt;
: The paper argues that high-level human behavior can be understood by analyzing the chain of fast, low level activity (from 10ms up) in the perceptual/cognitive bands that compose larger behaviors. It gives an intro to ACT-R and variants and some compelling examples for cognitive modeling and eye-tracking. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://search.ebscohost.com.revproxy.brown.edu/login.aspx?direct=true&amp;amp;db=psyh&amp;amp;AN=2003-08881-009&amp;amp;site=ehost-live Cyberpsychology: A Human-Interaction Perspective Based on Cognitive Modeling] Emond-2003-CHI&lt;br /&gt;
:This paper argues for the applicability of cognitive modeling to cyberpsychology, the study of the impact of computer and Internet interaction on humans. ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1980000/1978559/p62-modha.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315949230_6e20807eca999db6a25f54bcbf51ae6a Cognitive Computing], Modha-2011-CC&lt;br /&gt;
: Unites neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain&#039;s core algorithms.  (--- [[User:Chen Xu|Chen Xu]] 17:25, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: (This item can also go into the Evaluation category.) This paper discusses cognitive load theory and its application in measuring the effectiveness of graph visualization. A model of user task performance, mental effort, and cognitive load is proposed, and experiments have been conducted to refine the model. This seems to be an attempt along the lines of defining quality metrics for visualization through cognitive modeling, which closely relates to our proposal. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.4716&amp;amp;rep=rep1&amp;amp;type=pdf Supporting Collaboration and Distributed Cognition in Context-Aware Pervasive Computing Environments] Fischer-2004-SCD&lt;br /&gt;
: (Not exactly sure where this paper belongs.) The paper looks at several issues in HCI: 1) collaborative design (multiple users accessing, manipulating, and addressing information, in what they call &amp;quot;large computational spaces&amp;quot;), 2) mobile technology (mobile phones, wireless, etc.), and 3) smart objects (seems to be largely mobile phones and similar devices). This paper is dated and parts of it are very interesting but equally irrelevant. Nonetheless, it asks some important questions about how to deliver information to the user, how to manage search techniques and memory systems (particularly with searching) in HCI, and how to access information, all of which are crucial to &amp;quot;modern&amp;quot; HCI research. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 17:15, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.180 Attention, Habituation and Conditioning: Toward a Computational Model] Balkenius-2000-AHC&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:see summary for Alexander below (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices, ~250 rules is enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
: Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic behind tangible user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, by perhaps forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines, some of which we may be able to build on in automating interface evaluation; they will certainly apply to manual evaluations. [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: The paper presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1979100 Understanding Interaction Design Practices], Goodman et al., CHI &#039;11 (Owner: Diem Tran - Steven, if you already own this, let me know)&lt;br /&gt;
: This is a position paper describing the disconnect between HCI research and real interaction design practices.  It analyzes approaches for studying design practice (e.g., reported practice, anecdotal descriptions, first-person research), and argues a need for generative theories of design in order to address practice.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=223904.223945 Transparent layered user interfaces: an evaluation of a display design to enhance focused and divided attention] Harrison-1995-TLU&lt;br /&gt;
: Proposes a framework for classifying and evaluating user interfaces with semi-transparent windows. Comes out of research investigating graphical user interfaces from an attentional perspective. [[User:Jenna Zeigen|Jenna Zeigen]] 11:31, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge - Improving Decisions About Health, Wealth, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using graphical interaction histories within the Tableau InfoVis application.  This is a great recent example of the &amp;quot;workflow analysis&amp;quot; we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An Exploratory Study of Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to identify the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Covers similar ground to the previous paper: collaborative synthesis of analytic results.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=3xwfia2DpmoC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=inspiration&amp;amp;ots=WxmfUK8fgu&amp;amp;sig=RZxglxiWjKIu5MHpYRAAO6cqR_I#v=onepage&amp;amp;q&amp;amp;f=false Autonomous robots: from biological inspiration to implementation and control ]&lt;br /&gt;
:Autonomous robots are intelligent machines capable of performing tasks in the world by themselves, without explicit human control. This book examines the underlying technology, including control, architectures, learning, manipulation, grasping, navigation, and mapping. Living systems can be considered the prototypes of autonomous systems. ([[User: Wenjun Wang | Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1495824 Data, Information, and Knowledge in Visualization], Chen et al., CG&amp;amp;A Jan 09&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.  (As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1137505 An Approach to the Perceptual Optimization of Complex Visualizations], House, Bair, and Ware, TVCG, 2006&lt;br /&gt;
: This paper describes a humans-in-the-loop architecture for guiding layered visualizations with multiple visual parameters toward optimal tunings.  They use a genetic algorithm to iteratively produce new &amp;quot;genomes&amp;quot; of visual parameters that are evaluated by humans (and either passed along or terminated in the genetic process).  Finally they do some analysis on the surviving visualization space (though for me, this was less interesting than the generative visualization method using humans and the GA).  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11) -- &#039;&#039;&#039;Owner: Jenna Zeigen&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1180000/1179816/p168-klein.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315948835_60407af6754ff14995331d7da5b2e5f2 Brain structure visualization using spectral fiber clustering] Klein-2006-BSV&lt;br /&gt;
: Presents a novel algorithm for visualizing white-matter fiber tracts in real time, with more accurate results. This visualization algorithm might be adopted in our proposal. ( --- [[User:Chen Xu|Chen Xu]] 17:21, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/570000/566646/p745-van_wijk.pdf?ip=138.16.160.6&amp;amp;CFID=45026276&amp;amp;CFTOKEN=25635165&amp;amp;__acm__=1316442526_0ba5fadd25f8f9e571d2e63ba88f17f9 Image Based Flow Visualization] van Wijk-2002-IBF&lt;br /&gt;
: Describes IBFV, a two-dimensional fluid-flow visualization method based on the advection and decay of dye. With IBFV, a wide variety of visualization techniques can be emulated: it can visualize flow, generate arrow plots, streamlines, particles, and topological images, and handle unsteady flows. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2004.00753.x/full The State of the Art in Flow Visualization: Dense and Texture-Based Techniques] Laramee-2004-SAF&lt;br /&gt;
: Discusses dense, texture-based flow visualization techniques. (Chen)&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382909 Interactive Visualization of Small World Graphs] van Ham-2004-IVS&lt;br /&gt;
: [[User:Jenna Zeigen|Jenna Zeigen]] 11:03, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=wdh2gqWfQmgC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=visualization&amp;amp;ots=olzI7xnGLy&amp;amp;sig=7F0m2_NpZcU-fr0V5CPzf99PpK4#v=onepage&amp;amp;q&amp;amp;f=false Readings in information visualization: using vision to think ] By Stuart K. Card, Jock D. Mackinlay, Ben Shneiderman.  ([[User: Wenjun Wang | Wenjun Wang]]) 11:09, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=546918 Visualization Toolkit: An Object-Oriented Approach to 3-D Graphics ]   ([[User: Wenjun Wang | Wenjun Wang]]) 11:15, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www2.iwr.uni-heidelberg.de/groups/CoVis/Data/Papers/euroVis10.pdf A Salience-based Quality Metric for Visualization] Jänicke-Chen-2010&lt;br /&gt;
: This paper describes a method for defining quality metrics for visualization based on the distribution of salience over a visualization image. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/990000/989880/p109-plaisant.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932639_76770a28b192503c5f3c0ac3f997a8f1 The Challenge of Information Visualization Evaluation] Plaisant-2004&lt;br /&gt;
: This paper surveys the field of information visualization evaluation - current practices, challenges, and possible next steps. It is a relatively old article, though, so it may be superseded by a more current survey. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1170000/1168162/a9-zuk.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932894_18cbcddc03c6ac39fc573f72790ea5d1 Heuristics for Information Visualization Evaluation] Zuk et al-2006&lt;br /&gt;
: This paper attempts to adapt well-known heuristic evaluation methods from HCI to information visualization. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
*[http://www.computer.org/portal/web/csdl/doi/10.1109/MCG.2006.70 Toward Measuring Visualization Insight]&lt;br /&gt;
: Do current approaches for evaluating visualizations provide measures of insight? This viewpoint identifies critical characteristics of insight, argues the fundamental reasons why traditional controlled experiments with benchmark tasks on visualizations do not effectively measure insight, and offers a new approach to controlled experiments that can better capture the notion of insight. ([[User: Wenjun Wang| Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://leadserv.u-bourgogne.fr/files/publications/000599-multi-label-classification-of-music-into-emotions.pdf Multi-Label Classification of Music into Emotions] ISMIR 2008&lt;br /&gt;
: Humans are, by nature, emotionally affected by music. This interdisciplinary paper works toward automated emotion detection in music; four algorithms are evaluated and compared on this task. ([[User:Wenjun Wang|Wenjun Wang]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
== Visual Analysis ==&lt;br /&gt;
* [http://www.limsi.fr/Individu/tarroux/enseignement/old/FraundorferBischof-wapcv2003.pdf Utilizing Saliency Operators for Image Matching] Fraundorfer-2003-USO&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5171</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5171"/>
		<updated>2011-09-19T16:04:37Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, dating from the 1940s, inform visual design and may provide an analogy for the integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  First and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided Search 2.0: A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, the process of design)? They find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot; but it &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753357 Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visual Design], Heer and Bostock, CHI &#039;10 &lt;br /&gt;
: This paper explores crowdsourcing as a viable method for conducting visualization perception evaluations. They replicate some results of Cleveland and McGill&#039;s 1984 graphical perception paper, and do some analysis on cost and performance of using MTurk for these studies on static, chart-type visualizations. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
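: As a quick illustration of the relationship (a minimal sketch; the constants a and b are made up for illustration, not fitted values from any cited study):&lt;br /&gt;

```python
import math

def hicks_decision_time(n_choices, a=0.0, b=0.2):
    """Hick's law: mean decision time grows logarithmically with the
    number of equally probable choices.  a (base time) and b (seconds
    per bit) are empirically fitted; the defaults here are illustrative."""
    return a + b * math.log2(n_choices + 1)

# Doubling the number of menu items adds only a constant increment:
# hicks_decision_time(7) -> 0.6, hicks_decision_time(15) -> 0.8
```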
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query from assumptions from eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/PSYCO354/pdfstuff/Readings/Evans2.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green; font-weight: bold&amp;quot;&amp;gt;Owner: [[User:Nathan Malkin|Nathan Malkin]]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1518701.1518717&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Getting inspired!: understanding how and why examples are used in creative design practice] Herring-2009-GIU&lt;br /&gt;
: A user study on the use of examples to improve creativity. Results show that examples are very useful for inspiring designers with new ideas. Surprisingly, inspiring examples are not limited to the design domain but extend to other areas too.&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210002852&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=31-Jan-2011&amp;amp;view=c&amp;amp;wchp=dGLbVBA-zSkWA&amp;amp;md5=b7ee648939c3d41e25e3317f8be617dc/1-s2.0-S0747563210002852-main.pdf Contemporary cognitive load theory research: The good, the bad and the ugly] Kirschner-2010-CCL&lt;br /&gt;
: This review summarizes and critiques 16 papers on cognitive load theory (CLT) and its impact on learning and the ability to navigate different environments. It also discusses the difficulties inherent to the study of cognitive load and the moves made to attack them. While the paper does not contribute directly to our research, it does provide background on some of the issues of usability and &amp;quot;tolerance&amp;quot; of HCI systems that we discussed last week. [[User:Clara Kliman-Silver|Clara Kliman-Silver]] 13:54, 18 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://cacm.acm.org/magazines/2003/3/6879-models-of-attention-in-computing-and-communication/fulltext Models of attention in computing and communication: from principles to applications] Horvitz-2003-MAC&lt;br /&gt;
: [[User:Jenna Zeigen|Jenna Zeigen]] 10:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://rl3tp7zf5x.scholar.serialssolutions.com/?sid=google&amp;amp;auinit=P&amp;amp;aulast=Slovic&amp;amp;atitle=The+construction+of+preference.&amp;amp;id=doi:10.1037/0003-066X.50.5.364&amp;amp;title=American+psychologist&amp;amp;volume=50&amp;amp;issue=5&amp;amp;date=1995&amp;amp;spage=364&amp;amp;issn=0003-066X The construction of preference ]&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper which demonstrated that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
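: For reference, Fitts&#039;s law in its common Shannon formulation can be sketched as below (the intercept and slope are illustrative constants, not fitted values from the paper); the bubble cursor effectively enlarges the target width, lowering the index of difficulty:&lt;br /&gt;

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts's law: predicted time to acquire a
    target of size `width` at `distance`.  a and b are device-specific
    constants fitted from data; the defaults here are illustrative."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A larger effective target (as with the bubble cursor) is faster:
# fitts_movement_time(7, 1) -> 0.55, fitts_movement_time(7, 7) -> 0.25
```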
&lt;br /&gt;
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance, which used curvature to predict the speed of a pen as it moved across a surface, to help disambiguate target-selection intent. &lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then to measure these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Method) aspect handles the parallel nature of several human components of HCI and seems to model the low-level tasks from this study very accurately.  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
: Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential on menu-technique research over the years. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
: This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
: Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares the performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
: Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
: Any visualization that is used extensively over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context in order to design visualization applications that fit the needs of real-world users.  This paper studies in detail the daily workflow tasks and work patterns of analysts, managers, and software developers in a medium-sized software company.  It provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality switch tasks on average every two to three minutes, and instead work on larger, thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, the paper strongly indicates that many information workers are in a constant state of rapid-fire multi-tasking, which suggests that for a visualization to be relevant to these workers, it would need to fit into, and support, this workflow.  This is just a first step toward understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  The purported advantages of automated evaluation are linked to efficiency: comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks the taxonomy down with the individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g. the Smith and Mosier guidelines, the Motif style guidelines, etc.).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A CHI 2007 workshop discussion of the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Discusses topics like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidtt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failures, and then discusses misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class, has many good lists of guidelines. Especially Ch.2 pp 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods, a fair introduction to their industrial applications and certain costs and benefits, as well as suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG), 5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper on how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous work to analytics tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (as art, science, and technology), and questions and quantifies the utility of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039;s law to trajectory-based tasks.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several InfoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr, Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Operators, Methods, and Selection rules) based interaction techniques.  GOMS is a widely used UI analysis paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.pdf The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A paper that serves as a prelude to the above; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interactive visualization interface for 3D selection of neural pathways in human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and intuitively. (--- [[User:Chen Xu|Chen Xu]] 15:04, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye tracking methodologies continue to grow and improve. (Clara, 11 September 2011--OWNER; Chen -- Discussant)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and to see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
: Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.comp.leeds.ac.uk/umuas/reading-group/kaptelinin-ch5.pdf Activity Theory: Implications for Human-Computer Interaction] Kaptelinin--1996-ATI&lt;br /&gt;
: This article discusses activity theory, an alternative to present theories surrounding HCI. In particular, it examines the principal differences between activity theory and cognitive theory, applies it to HCI, and suggests implications for the field. While not directly relevant to the proposal, it offers an alternate framework for some of the issues that we discuss. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pages.cpsc.ucalgary.ca/~saul/wiki/uploads/HCIPapers/landauer-letsgetreal.pdf Let&#039;s Get Real: a Position Paper on the Role of Cognitive Psychology in the Design of Humanely Usable and Useful Systems] Landauer-1991-LGR&lt;br /&gt;
: Perhaps less useful, only because it&#039;s 20 years old, but an interesting read nonetheless: this paper questions the &amp;quot;modern&amp;quot; relevance of cognitive psychology to human-computer interaction design. The primary issue, it argues, is that human-computer systems are entirely unpredictable, and thus, some of the modern understanding of cognition (and, indeed, HCI theory) simply cannot apply given the erratic behavior of computer systems. Instead, he addresses some of the more &amp;quot;useful models,&amp;quot; including Fitts&#039;s law and theories of visual perception, to define a new space for emerging research in HCI. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1978969&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Mid-air pan-and-zoom on wall-sized displays] Nancel-2011-MPZ&lt;br /&gt;
: The paper describes approaches to perform pan and zoom tasks in mid-air: bimanual &amp;amp; unimanual, linear &amp;amp; circular gestures. &lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979430&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 LiquidText: a flexible, multitouch environment to support active reading] Tashman-2011-LER&lt;br /&gt;
: A technique utilizing multitouch to improve reading efficiency.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979392&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Rethinking &#039;multi-user&#039;: an in-the-wild study of how groups approach a walk-up-and-use tabletop interface] Marshall-2011-RMU&lt;br /&gt;
: An ethnographic study that explores how groups of users approach tabletop interfaces in real environments. Some results contradict existing findings.&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU&lt;br /&gt;
: Discusses the merits of HCI (here called muCI, for muscle-computer interaction) by detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. [[User:Michael Spector|Michael Spector]] 13:19, 13 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://courses.ischool.utexas.edu/rbias/2009/Spring/INF385P/files/annurev.psych.54.101601.pdf HUMAN-COMPUTER INTERACTION: Psychological Aspects of the Human Use of Computing] Olson-2003-HCI&lt;br /&gt;
: Overview of issues in psychology in HCI ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;amp;_cid=271802&amp;amp;_user=489286&amp;amp;_pii=S0747563210001718&amp;amp;_check=y&amp;amp;_origin=&amp;amp;_coverDate=30-Nov-2010&amp;amp;view=c&amp;amp;wchp=dGLzVlt-zSkzS&amp;amp;md5=06d9eeba2447db1d43abcf99d1d7e995/1-s2.0-S0747563210001718-main.pdf Integrating cognitive load theory and concepts of human–computer interaction] Hollender-2010-ICL&lt;br /&gt;
: This paper compares existing models of cognitive load theory as it applies to HCI, reviews present literature, and discusses current problems and potential advances. Relevant to our work but ventures into theory that is heavier than necessary given our purposes. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 15:58, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=4154947 Gesture Recognition : A Survey] IEEE&lt;br /&gt;
: This article provides a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. ([[User:Wenjun Wang|Wenjun Wang]] , 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
*[http://www.sciencedirect.com/science/article/pii/S1071581903001125 A person–artefact–task (PAT) model of flow antecedents in computer-mediated environments] Finneran-2003-PAT&lt;br /&gt;
: Re-evaluates flow theory within the HCI framework and proposes a model that fits best in the field [[User:Jenna Zeigen|Jenna Zeigen]] 11:50, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the promises] Jacob-2003-ETH&lt;br /&gt;
: This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent feud between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  There is little reason given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticisms which follow descriptions, in which theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
: High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:Attempt to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen: the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R]&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar]&lt;br /&gt;
:Soar is another cognitive architecture developed at CMU, now maintained at the University of Michigan.  (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.4719&amp;amp;rep=rep1&amp;amp;type=pdf Brain-Computer Interfaces and Human-Computer Interaction]&lt;br /&gt;
: (Not sure what heading this ought to go under!) Brain-Computer Interfaces (BCI) offer a neurological corollary to human-computer interfaces: users use their thoughts to signal to machines, instead of relying on physical movements. Thus, the areas activated are purely cognitive, not motor. The article provides an overview of the differences between HCI and BCI, the implications thereof, and the directions that their interaction may take. Relevant to some of the issues and concepts raised in the proposal, in addition to being a rather interesting idea! (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=101 Spanning Seven Orders of Magnitude: A Challenge for Cognitive Modeling], John Anderson, 2002&lt;br /&gt;
: The paper argues that high-level human behavior can be understood by analyzing the chain of fast, low level activity (from 10ms up) in the perceptual/cognitive bands that compose larger behaviors. It gives an intro to ACT-R and variants and some compelling examples for cognitive modeling and eye-tracking. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://search.ebscohost.com.revproxy.brown.edu/login.aspx?direct=true&amp;amp;db=psyh&amp;amp;AN=2003-08881-009&amp;amp;site=ehost-live Cyberpsychology: A Human-Interaction Perspective Based on Cognitive Modeling] Emond-2003-CHI&lt;br /&gt;
:This paper argues for the applicability of cognitive modeling to cyberpsychology, the study of the impact of computer and Internet interaction on humans. ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1980000/1978559/p62-modha.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315949230_6e20807eca999db6a25f54bcbf51ae6a Cognitive Computing], Modha-2011-CC&lt;br /&gt;
: Seeks to unite neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain’s core algorithms.  (--- [[User:Chen Xu|Chen Xu]] 17:25, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: (This item can also go into the Evaluation category) This paper discusses the cognitive load theory and its application in measuring effectiveness of graph visualization. A model of user task performance, mental effort and cognitive load has been proposed and experiments have been conducted to refine the model. This seems to be an attempt along the line of defining quality metrics for visualization through cognitive modeling, which then closely relates to our proposal. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.4716&amp;amp;rep=rep1&amp;amp;type=pdf Supporting Collaboration and Distributed Cognition in Context-Aware Pervasive Computing Environments] Fischer-2004-SCD&lt;br /&gt;
: (Not exactly sure where this paper belongs.) The paper looks at several issues in HCI: 1) collaborative design (multiple users accessing, manipulating, and addressing information, in what they call &amp;quot;large computational spaces&amp;quot;), 2) mobile technology (mobile phones, wireless, etc.), and 3) smart objects (seems to be largely mobile phones and similar devices). This paper is dated and parts of it are very interesting but equally irrelevant. Nonetheless, it asks some important questions about how to deliver information to the user, how to manage search techniques and memory systems (particularly with searching) in HCI, and how to access information, all of which are crucial to &amp;quot;modern&amp;quot; HCI research. ([[User:Clara Kliman-Silver|Clara Kliman-Silver]] 17:15, 18 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.180 Attention, Habituation and Conditioning: Toward a Computational Model] Balkenius-2000-AHC&lt;br /&gt;
: ([[User:Nathan Malkin|Nathan Malkin]], 19 September 2011)&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:see summary for Alexander below (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices, ~250 rules is enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
: Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic that lies behind tangible user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, by perhaps forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines some of which we may be able to build on in automating interface evaluation; will certainly apply to manual evaluations [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: The paper presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1979100 Understanding Interaction Design Practices], Goodman et al., CHI &#039;11 (Owner: Diem Tran - Steven, if you already own this, let me know)&lt;br /&gt;
: This is a position paper describing the disconnect between HCI research and real interaction design practices.  It analyzes approaches for studying design practice (e.g., reported practice, anecdotal descriptions, first-person research), and argues a need for generative theories of design in order to address practice.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=223904.223945 Transparent layered user interfaces: an evaluation of a display design to enhance focused and divided attention] Harrison-1995-TLU&lt;br /&gt;
: Proposes a framework for classifying and evaluating user interfaces with semi-transparent windows. Comes out of research investigating graphical user interfaces from an attentional perspective. [[User:Jenna Zeigen|Jenna Zeigen]] 11:31, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge - Improving Decisions About Wealth, Health, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using Graphical Interaction Histories within the Tableau InfoVis application.  This is a great recent example of &amp;quot;workflow analysis&amp;quot; that we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An Exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to identify the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Same as the previous one.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=3xwfia2DpmoC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=inspiration&amp;amp;ots=WxmfUK8fgu&amp;amp;sig=RZxglxiWjKIu5MHpYRAAO6cqR_I#v=onepage&amp;amp;q&amp;amp;f=false Autonomous robots: from biological inspiration to implementation and control ]&lt;br /&gt;
:Autonomous robots are intelligent machines capable of performing tasks in the world by themselves, without explicit human control. This book examines the underlying technology, including control, architectures, learning, manipulation, grasping, navigation, and mapping. Living systems can be considered the prototypes of autonomous systems. ([[User: Wenjun Wang | Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1495824 Data, Information, and Knowledge in Visualization], Chen et al., CG&amp;amp;A Jan 09&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.  (As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1137505 An Approach to the Perceptual Optimization of Complex Visualizations], House, Bair, and Ware, TVCG, 2006&lt;br /&gt;
: This paper describes a humans-in-the-loop architecture for guiding layered visualizations with multiple visual parameters toward optimal tunings.  They use a genetic algorithm to iteratively produce new &amp;quot;genomes&amp;quot; of visual parameters that are evaluated by humans (and either passed along or terminated in the genetic process).  Finally they do some analysis on the surviving visualization space (though for me, this was less interesting than the generative visualization method using humans and the GA).  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11) -- &#039;&#039;&#039;Owner: Jenna Zeigen&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1180000/1179816/p168-klein.pdf?ip=138.16.160.6&amp;amp;CFID=41737444&amp;amp;CFTOKEN=92942121&amp;amp;__acm__=1315948835_60407af6754ff14995331d7da5b2e5f2 Brain structure visualization using spectral fiber clustering] Klein-2006-BSV&lt;br /&gt;
: Presents a novel algorithm for visualizing white matter fiber tracts in real time, with more accurate results. This visualization algorithm might be adopted in our proposal. ( --- [[User:Chen Xu|Chen Xu]] 17:21, 13 September 2011 (EDT) -OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/570000/566646/p745-van_wijk.pdf?ip=138.16.160.6&amp;amp;CFID=45026276&amp;amp;CFTOKEN=25635165&amp;amp;__acm__=1316442526_0ba5fadd25f8f9e571d2e63ba88f17f9 Image Based Flow Visualization] van Wijk-2002-IBF&lt;br /&gt;
: Describes IBFV, a two-dimensional fluid-flow visualization method based on advection and decay of dye. With IBFV, a wide variety of visualization techniques can be emulated: it can visualize flow, generate arrow plots, streamlines, particles, and topological images, and handle unsteady flows. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2004.00753.x/full The State of the Art in Flow Visualization: Dense and Texture-Based Techniques] Laramee-2004-SAF&lt;br /&gt;
: Discusses dense, texture-based flow visualization techniques. (Chen)&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382909 Interactive Visualization of Small World Graphs] van Ham-2004-IVS&lt;br /&gt;
: [[User:Jenna Zeigen|Jenna Zeigen]] 11:03, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=wdh2gqWfQmgC&amp;amp;oi=fnd&amp;amp;pg=PR13&amp;amp;dq=visualization&amp;amp;ots=olzI7xnGLy&amp;amp;sig=7F0m2_NpZcU-fr0V5CPzf99PpK4#v=onepage&amp;amp;q&amp;amp;f=false Readings in information visualization: using vision to think ] By Stuart K. Card, Jock D. Mackinlay, Ben Shneiderman.  ([[User: Wenjun Wang | Wenjun Wang]]) 11:09, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=546918 Visualization Toolkit: An Object-Oriented Approach to 3-D Graphics ]   ([[User: Wenjun Wang | Wenjun Wang]]) 11:15, 19 September 2011 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Bebunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www2.iwr.uni-heidelberg.de/groups/CoVis/Data/Papers/euroVis10.pdf A Salience-based Quality Metric for Visualization] Jänicke-Chen-2010&lt;br /&gt;
: This paper describes a method for defining quality metrics for visualization based on the distribution of salience over a visualization image. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/990000/989880/p109-plaisant.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932639_76770a28b192503c5f3c0ac3f997a8f1 The Challenge of Information Visualization Evaluation] Plaisant-2004&lt;br /&gt;
: This paper surveys the field of information visualization evaluation - current practices, challenges and possible next steps. It is a relatively old article, though, so it may have been superseded by a more current survey. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1170000/1168162/a9-zuk.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932894_18cbcddc03c6ac39fc573f72790ea5d1 Heuristics for Information Visualization Evaluation] Zuk et al-2006&lt;br /&gt;
: This paper attempts to apply some well-known heuristic evaluation techniques from HCI to information visualization. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
*[http://www.computer.org/portal/web/csdl/doi/10.1109/MCG.2006.70 Toward Measuring Visualization Insight]&lt;br /&gt;
: Do current approaches for evaluating visualizations provide measures of insight? This viewpoint identifies critical characteristics of insight, argues the fundamental reasons why traditional controlled experiments with benchmark tasks on visualizations do not effectively measure insight, and offers a new approach to controlled experiments that can better capture the notion of insight. ([[User: Wenjun Wang| Wenjun Wang]])&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://leadserv.u-bourgogne.fr/files/publications/000599-multi-label-classification-of-music-into-emotions.pdf  Multi-Label Classification of Music into Emotions] ISMIR 2008&lt;br /&gt;
: Humans, by nature, are emotionally affected by music. This interdisciplinary paper works toward automated emotion detection in music; four algorithms are evaluated and compared on this task. ([[User:Wenjun Wang|Wenjun Wang]], 19 September 2011)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_2.11&amp;diff=5120</id>
		<title>CS295J/Literature class 2.11</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_2.11&amp;diff=5120"/>
		<updated>2011-09-15T16:15:10Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: summary description&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [http://dl.acm.org/citation.cfm?id=1461832 Project Ernestine: Validating a GOMS Analysis for Predicting and Explaining] ([[image:Project_Ernestine_Paper.pdf]]) A research project from way back that demonstrated that modeling cognition plus motor plus perceptual tasks by telephone operators could predict the efficiency of a new user interface.  The efficiency turned out to be lower than the old, low-tech version, which was a surprise.  This paper is just the kind of result I&#039;d like to be able to publish about more complex user interfaces.  (Owner: David Laidlaw, discussion: Stephen Brawner, discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=642656 Cognitive Strategies and Eye Movements for Searching Hierarchical Computer Displays] This paper uses predictive modeling and eye-tracking data together to explain search behavior in hierarchical or non-hierarchical layouts. The layouts were lists of items organized in labeled groups, with the labels being either useful (hierarchical condition) or random (non-hierarchical condition).  The research question is about whether people use different strategies when searching for a target item in each condition. They compared their model&#039;s predictions to observed eye movements and found them to be a pretty good fit, and therefore characterized search strategies using the model. (Owner: Caroline Ziemkiewicz, discussion: Chen, discussant: Diem Tran)&lt;br /&gt;
&lt;br /&gt;
* [http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0000597 Mapping Human Whole-Brain Structural Networks with Diffusion MRI] The authors use diffusion MRI to create network maps with significantly larger detail than previous models of physical connectivity. Their methods allow them to study live humans and model the interconnectivity of neuronal groups as networks with thousands of nodes, versus previous methods with fewer than 100 nodes studied from post-mortem animal subjects. Based on these new experimental methods they demonstrate that the brain network has the form of a small-world network. (Owner: Stephen Brawner, discussant: Jenna Zeigen, discussant: Caroline Ziemkiewicz)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  (Owner: [[User:Steven Gomez|Steven Gomez]], discussion: Chen, discussant: Nathan)&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU (Posted by Michael Spector)&lt;br /&gt;
: The authors of this paper discuss the feasibility of muscle-computer interaction by demonstrating that simple finger gestures can be accurately detected using EMG data gathered from the upper forearm. While very accurate EMG data can be obtained with more invasive methods, the paper focuses on the applications of devices that can be used with ease in everyday situations: the example given is an armband type sensor. Muscle-computer interfaces like these are relatively undeveloped but could provide an intuitive means for interacting with computer systems, and through them, complicated visual representations of information. Interfaces like the one proposed in this paper are particularly good for scenarios where the user needs to be able to multitask quickly between the computer system and other activities, an important quality for scientific problem solving. While this paper does not explicitly touch upon cognitive aspects of HCI, its primary focus is to develop a system that lessens the cognitive strain on the user through practicality and ease of use. The technical details of the experiment are less pertinent to our project, but the underlying message remains relevant. (Owner: Michael Spector, Discussant: Clara Kliman-Silver, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD. (Posted by Michael Spector)&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. (Owner: ? Discussant: Caroline Ziemkiewicz Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978996 This is your brain on interfaces: enhancing usability testing with functional near-infrared spectroscopy] Hirshfield-2011-BOI&lt;br /&gt;
: The authors use a brain measuring device to detect activations in the brain when a user is performing a task. In this way, the authors are able to measure a range of cognitive workload states, known as subjective factors, which are difficult to measure using qualitative studies. They quantify the workloads of users in three different UI tasks and point out the usability of a UI design choice as well as the low-level cognitive resources in the brain that correspond to a task. How this paper relates to our project: as it presents a novel approach to measuring the effectiveness of a UI, we should consider this approach when evaluating the tools we develop. (Owner: Diem Tran, Discussant: Hua Guo, Discussant: Michael Spector)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: This paper discusses the cognitive load theory and its application in measuring effectiveness of graph visualization. A model of user task performance, mental effort and cognitive load has been proposed and experiments have been conducted to refine the model. This seems to be an attempt along the line of defining quality metrics for visualization through cognitive modeling, which then closely relates to our proposal. (Owner: Hua Guo, Discussant: Clara Kliman-Silver, Discussant: Jenna Zeigen)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article discusses search performance strategies in user interfaces per the results of an eye-tracking study. Specific attention is given to menu organization in the context of Web interfaces; the study asks whether people can learn to navigate menu environments that differ from standard layouts. Results show that they are able to adapt to new layouts, even when these violate previous expectations. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye tracking methodologies continue to grow and improve. It also serves to reinforce the importance of eye tracking in HCI research and how exactly people analyze the data, techniques we can take into account as we seek new approaches for our own HCI research. (Owner: Clara Kliman-Silver, Discussant: Michael Spector, Discussant: Wenjun Wang)&lt;br /&gt;
&lt;br /&gt;
*[http://www.sagepub.com/mcbridestudysite/study/Chapters/articles/Ch11_Breslow_Article.pdf Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis. Discusses the differences between absolute identification and relative comparison tasks and the implications of different types of color scales on the performance of such tasks. The authors create computational models of the processes and then compare their predictions to the results of two experiments. This paper is relevant because the visualizations we create probably will involve color scales, and the analysis of our visualizations probably will involve both types of tasks described.  (Owner: Jenna Zeigen, Discussant: [[User:Steven Gomez|Steven Gomez]], Discussant: Stephen Brawner)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;.&lt;br /&gt;
: Specifically, it:&lt;br /&gt;
# Provides a brief overview of the human visual system and the concept of visual attention&lt;br /&gt;
# Presents and discusses a number of different psychological and cognitive models of attention&lt;br /&gt;
# Reports on the most common approaches to visual attention used by computational systems&lt;br /&gt;
# Discusses the evaluation of these systems, their applications in computer vision and robotics, and open questions in the field&lt;br /&gt;
: &lt;br /&gt;
: This relates to our project in that:&lt;br /&gt;
# It provides an overview, and lots of references, to models of attention, which is generally useful for understanding human-computer interaction.&lt;br /&gt;
# More specifically, predictive models could be used to identify regions of (particular) interest to users of the software, for example to provide additional details, auto-focus, or auto-select. These models, and their predictions, could be verified and compared with eye-tracking data.&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;(Owner: [[User:Nathan Malkin|Nathan Malkin]], Discussant: Hua Guo , Discussant: Wenjun Wang)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
: Describes an interactive visualization interface for 3D selection of neural pathways in the human brain. The mouse-based interface helps neuroscientists select neural pathways more efficiently and intuitively. (Owner: Chen, Discussant: Nathan, Discussant: Diem Tran)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. &lt;br /&gt;
: A set of design guidelines, some of which we may be able to build on in automating interface evaluation; they will certainly apply to manual evaluations. (Owner: Wenjun Wang, Discussant: [[User:Steven Gomez|Steven Gomez]], Discussant: ?)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_2.11&amp;diff=5104</id>
		<title>CS295J/Literature class 2.11</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_2.11&amp;diff=5104"/>
		<updated>2011-09-14T19:55:15Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [http://dl.acm.org/citation.cfm?id=1461832 Project Ernestine] A research project from way back that demonstrated that modeling cognition plus motor plus perceptual tasks by telephone operators could predict the efficiency of a new user interface.  The efficiency turned out to be lower than the old, low-tech version, which was a surprise.  This paper is just the kind of result I&#039;d like to be able to publish about more complex user interfaces.  (Owner: David Laidlaw, discussion: ?, discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=642656 Cognitive Strategies and Eye Movements for Searching Hierarchical Computer Displays] This paper uses predictive modeling and eye-tracking data together to explain search behavior in hierarchical or non-hierarchical layouts. The layouts were lists of items organized in labeled groups, with the labels being either useful (hierarchical condition) or random (non-hierarchical condition).  The research question is about whether people use different strategies when searching for a target item in each condition. They compared their model&#039;s predictions to observed eye movements and found them to be a pretty good fit, and therefore characterized search strategies using the model. (Owner: Caroline Ziemkiewicz, discussion: Chen, discussant: Diem Tran)&lt;br /&gt;
&lt;br /&gt;
* [http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0000597 Mapping Human Whole-Brain Structural Networks with Diffusion MRI] The authors use diffusion MRI to create network maps with significantly greater detail than previous models of physical connectivity. Their methods allow them to study live humans and to model the interconnectivity of neuronal groups as networks with thousands of nodes, versus previous methods that studied fewer than 100 nodes in post-mortem animal subjects. Based on these new experimental methods, they demonstrate that the brain network has a small-world structure. (Owner: Stephen Brawner, discussant: Jenna Zeigen, discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  (Owner: [[User:Steven Gomez|Steven Gomez]], discussion: Chen, discussant: Nathan)&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU (Posted by Michael Spector)&lt;br /&gt;
: Discusses the merits of performing HCI (here called muCI, for muscle-computer interaction) via detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. (Owner: Michael Spector, Discussant: Clara Kliman-Silver, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD. (Posted by Michael Spector)&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. (Owner: ? Discussant: ? Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978996 This is your brain on interfaces: enhancing usability testing with functional near-infrared spectroscopy] Hirshfield-2011-BOI&lt;br /&gt;
: The authors use a brain-measuring device to detect activations in the brain while a user performs a task. In this way, they are able to measure a range of cognitive workload states, known as subjective factors, which are difficult to measure using qualitative studies. They quantify users&#039; workloads in three different UI tasks and relate the usability of a UI design choice to the low-level cognitive resources in the brain that each task engages. Because this paper presents a novel approach to measuring the effectiveness of a UI, it is relevant to our project: we should consider this approach when evaluating the tools we develop. (Owner: Diem Tran, Discussant: Hua Guo, Discussant: Michael Spector)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: This paper discusses cognitive load theory and its application to measuring the effectiveness of graph visualizations. The authors propose a model of user task performance, mental effort, and cognitive load, and conduct experiments to refine the model. This seems to be an attempt along the lines of defining quality metrics for visualization through cognitive modeling, which closely relates to our proposal. (Owner: Hua Guo, Discussant: Clara Kliman-Silver, Discussant: Jenna Zeigen)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article discusses search performance strategies in user interfaces, based on the results of a series of eye-tracking studies. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye-tracking methodologies continue to grow and improve. It also reinforces the importance of eye tracking in HCI research and illustrates how people analyze the resulting data. (Owner: Clara Kliman-Silver, Discussant: Michael Spector, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis. Discusses the differences between absolute identification and relative comparison tasks and the implications of different types of color scales on the performance of such tasks. The authors create computational models of the processes and then compare their predictions to the results of two experiments. This paper is relevant because the visualizations we create probably will involve color scales, and the analysis of our visualizations probably will involve both types of tasks described.  (Owner: Jenna Zeigen, Discussant: ? , Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;(Owner: [[User:Nathan Malkin|Nathan Malkin]], Discussant: Hua Guo , Discussant: ?)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
: Describes an interactive visualization interface for 3D selection of neural pathways in the human brain. The mouse-based interface helps neuroscientists select neural pathways more efficiently and intuitively. (Owner: Chen, Discussant: Nathan, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. &lt;br /&gt;
: A set of design guidelines, some of which we may be able to build on in automating interface evaluation; they will certainly apply to manual evaluations. (Owner: Wenjun Wang, Discussant: ?, Discussant: ?)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_2.11&amp;diff=5103</id>
		<title>CS295J/Literature class 2.11</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_2.11&amp;diff=5103"/>
		<updated>2011-09-14T19:50:48Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [http://dl.acm.org/citation.cfm?id=1461832 Project Ernestine] A research project from way back that demonstrated that modeling cognition plus motor plus perceptual tasks by telephone operators could predict the efficiency of a new user interface.  The efficiency turned out to be lower than the old, low-tech version, which was a surprise.  This paper is just the kind of result I&#039;d like to be able to publish about more complex user interfaces.  (Owner: David Laidlaw, discussion: ?, discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=642656 Cognitive Strategies and Eye Movements for Searching Hierarchical Computer Displays] This paper uses predictive modeling and eye-tracking data together to explain search behavior in hierarchical or non-hierarchical layouts. The layouts were lists of items organized in labeled groups, with the labels being either useful (hierarchical condition) or random (non-hierarchical condition).  The research question is about whether people use different strategies when searching for a target item in each condition. They compared their model&#039;s predictions to observed eye movements and found them to be a pretty good fit, and therefore characterized search strategies using the model. (Owner: Caroline Ziemkiewicz, discussion: Chen, discussant: Diem Tran)&lt;br /&gt;
&lt;br /&gt;
* [http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0000597 Mapping Human Whole-Brain Structural Networks with Diffusion MRI] The authors use diffusion MRI to create network maps with significantly greater detail than previous models of physical connectivity. Their methods allow them to study live humans and to model the interconnectivity of neuronal groups as networks with thousands of nodes, versus previous methods that studied fewer than 100 nodes in post-mortem animal subjects. Based on these new experimental methods, they demonstrate that the brain network has a small-world structure. (Owner: Stephen Brawner, discussant: Jenna Zeigen, discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  (Owner: [[User:Steven Gomez|Steven Gomez]], discussion: Chen, discussant: Nathan)&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU (Posted by Michael Spector)&lt;br /&gt;
: Discusses the merits of performing HCI (here called muCI, for muscle-computer interaction) via detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. (Owner: Michael Spector, Discussant: Clara Kliman-Silver, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD. (Posted by Michael Spector)&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. (Owner: ? Discussant: ? Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978996 This is your brain on interfaces: enhancing usability testing with functional near-infrared spectroscopy] Hirshfield-2011-BOI&lt;br /&gt;
: The authors use a brain-measuring device to detect activations in the brain while a user performs a task. In this way, they are able to measure a range of cognitive workload states, known as subjective factors, which are difficult to measure using qualitative studies. They quantify users&#039; workloads in three different UI tasks and relate the usability of a UI design choice to the low-level cognitive resources in the brain that each task engages. Because this paper presents a novel approach to measuring the effectiveness of a UI, it is relevant to our project: we should consider this approach when evaluating the tools we develop. (Owner: Diem Tran, Discussant: Hua Guo, Discussant: Michael Spector)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: This paper discusses cognitive load theory and its application to measuring the effectiveness of graph visualizations. The authors propose a model of user task performance, mental effort, and cognitive load, and conduct experiments to refine the model. This seems to be an attempt along the lines of defining quality metrics for visualization through cognitive modeling, which closely relates to our proposal. (Owner: Hua Guo, Discussant: Clara Kliman-Silver, Discussant: Jenna Zeigen)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article discusses search performance strategies in user interfaces, based on the results of a series of eye-tracking studies. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye-tracking methodologies continue to grow and improve. It also reinforces the importance of eye tracking in HCI research and illustrates how people analyze the resulting data. (Owner: Clara Kliman-Silver, Discussant: Michael Spector, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis. Discusses the differences between absolute identification and relative comparison tasks and the implications of different types of color scales on the performance of such tasks. The authors create computational models of the processes and then compare their predictions to the results of two experiments. This paper is relevant because the visualizations we create probably will involve color scales, and the analysis of our visualizations probably will involve both types of tasks described.  (Owner: Jenna Zeigen, Discussant: ? , Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;(Owner: [[User:Nathan Malkin|Nathan Malkin]], Discussant: Hua Guo , Discussant: ?)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
: Describes an interactive visualization interface for 3D selection of neural pathways in the human brain. The mouse-based interface helps neuroscientists select neural pathways more efficiently and intuitively. (Owner: Chen, Discussant: ?, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. &lt;br /&gt;
: A set of design guidelines, some of which we may be able to build on in automating interface evaluation; they will certainly apply to manual evaluations. (Owner: Wenjun Wang, Discussant: ?, Discussant: ?)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_2.11&amp;diff=5092</id>
		<title>CS295J/Literature class 2.11</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_class_2.11&amp;diff=5092"/>
		<updated>2011-09-14T17:18:51Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [http://dl.acm.org/citation.cfm?id=1461832 Project Ernestine] A research project from way back that demonstrated that modeling cognition plus motor plus perceptual tasks by telephone operators could predict the efficiency of a new user interface.  The efficiency turned out to be lower than the old, low-tech version, which was a surprise.  This paper is just the kind of result I&#039;d like to be able to publish about more complex user interfaces.  (Owner: David Laidlaw, discussion: ?, discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=642656 Cognitive Strategies and Eye Movements for Searching Hierarchical Computer Displays] This paper uses predictive modeling and eye-tracking data together to explain search behavior in hierarchical or non-hierarchical layouts. The layouts were lists of items organized in labeled groups, with the labels being either useful (hierarchical condition) or random (non-hierarchical condition).  The research question is about whether people use different strategies when searching for a target item in each condition. They compared their model&#039;s predictions to observed eye movements and found them to be a pretty good fit, and therefore characterized search strategies using the model. (Owner: Caroline Ziemkiewicz, discussion: ?, discussant: Diem Tran)&lt;br /&gt;
&lt;br /&gt;
* [http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0000597 Mapping Human Whole-Brain Structural Networks with Diffusion MRI] The authors use diffusion MRI to create network maps with significantly greater detail than previous models of physical connectivity. Their methods allow them to study live humans and to model the interconnectivity of neuronal groups as networks with thousands of nodes, versus previous methods that studied fewer than 100 nodes in post-mortem animal subjects. Based on these new experimental methods, they demonstrate that the brain network has a small-world structure. (Owner: Stephen Brawner, discussant: Jenna Zeigen, discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  (Owner: [[User:Steven Gomez|Steven Gomez]], discussion: ?, discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU (Posted by Michael Spector)&lt;br /&gt;
: Discusses the merits of performing HCI (here called muCI, for muscle-computer interaction) via detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. (Owner: Michael Spector, Discussant: Clara Kliman-Silver, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD. (Posted by Michael Spector)&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. (Owner: ? Discussant: ? Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978996 This is your brain on interfaces: enhancing usability testing with functional near-infrared spectroscopy] Hirshfield-2011-BOI&lt;br /&gt;
: The authors use a brain-measuring device to detect activations in the brain while a user performs a task. In this way, they are able to measure a range of cognitive workload states, known as subjective factors, which are difficult to measure using qualitative studies. They quantify users&#039; workloads in three different UI tasks and relate the usability of a UI design choice to the low-level cognitive resources in the brain that each task engages. Because this paper presents a novel approach to measuring the effectiveness of a UI, it is relevant to our project: we should consider this approach when evaluating the tools we develop. (Owner: Diem Tran, Discussant: Hua Guo, Discussant: Michael Spector)&lt;br /&gt;
&lt;br /&gt;
* [http://rp-www.cs.usyd.edu.au/~whua5569/papers/infovis09.pdf Measuring effectiveness of graph visualizations: a cognitive load perspective] Huang-2009-JIV&lt;br /&gt;
: This paper discusses cognitive load theory and its application to measuring the effectiveness of graph visualizations. The authors propose a model of user task performance, mental effort, and cognitive load, and conduct experiments to refine the model. This seems to be an attempt along the lines of defining quality metrics for visualization through cognitive modeling, which closely relates to our proposal. (Owner: Hua Guo, Discussant: Clara Kliman-Silver, Discussant: Jenna Zeigen)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article discusses search performance strategies in user interfaces, based on the results of a series of eye-tracking studies. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye-tracking methodologies continue to grow and improve. It also reinforces the importance of eye tracking in HCI research and illustrates how people analyze the resulting data. (Owner: Clara Kliman-Silver, Discussant: Michael Spector, Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis. Discusses the differences between absolute identification and relative comparison tasks and the implications of different types of color scales on the performance of such tasks. The authors create computational models of the processes and then compare their predictions to the results of two experiments. This paper is relevant because the visualizations we create probably will involve color scales, and the analysis of our visualizations probably will involve both types of tasks described.  (Owner: Jenna Zeigen, Discussant: ? , Discussant: ?)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;(Owner: [[User:Nathan Malkin|Nathan Malkin]], Discussant: ? , Discussant: ?)&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5066</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5066"/>
		<updated>2011-09-13T16:54:42Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &amp;quot;owning&amp;quot; an article&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, from the 1940s, inform visual design and may provide an analogy for the integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  First and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided search 2. 0. A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, the process of design)? They find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot; but it &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753357 Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visual Design], Heer and Bostock, CHI &#039;10 &lt;br /&gt;
: This paper explores crowdsourcing as a viable method for conducting visualization perception evaluations. They replicate some results of Cleveland and McGill&#039;s 1984 graphical perception paper, and do some analysis on cost and performance of using MTurk for these studies on static, chart-type visualizations. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
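: Since the law is a one-line formula, a minimal sketch may help; the constants a and b below are illustrative placeholders, not empirically measured values:

```python
import math

def hick_rt(n_choices, a=0.2, b=0.15):
    """Predicted decision time in seconds for n equally likely choices.

    Hick's law: RT = a + b * log2(n + 1). The +1 accounts for the
    implicit option of making no choice. a and b are task- and
    device-dependent constants; the defaults here are placeholders.
    """
    return a + b * math.log2(n_choices + 1)

# Doubling the number of equally likely menu items adds a roughly
# constant increment to decision time rather than doubling it.
```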
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query from assumptions about eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/PSYCO354/pdfstuff/Readings/Evans2.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon]) (Hua - OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
: &amp;lt;span style=&amp;quot;color: green; font-weight: bold&amp;quot;&amp;gt;Owner: [[User:Nathan Malkin|Nathan Malkin]]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1518701.1518717&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Getting inspired!: understanding how and why examples are used in creative design practice] Herring-2009-GIU&lt;br /&gt;
: A user study on the use of examples to improve creativity. Results show that examples are very useful for inspiring designers with new ideas. Surprisingly, inspiring examples are not limited to the design domain, but extend to other areas too.&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper which demonstrated that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
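:For context, Fitts&#039;s law itself is a one-line formula; a minimal sketch follows, where the constants a and b are illustrative placeholders rather than values from any of these papers:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds under Fitts's law
    (Shannon formulation): MT = a + b * log2(D/W + 1).

    a and b are device-dependent constants fit from data; the
    defaults here are placeholders for illustration only.
    """
    return a + b * math.log2(distance / width + 1)

# Enlarging the effective target width shortens the predicted
# acquisition time, which is the effect the bubble cursor exploits.
time_small = fitts_mt(distance=400, width=10)
time_large = fitts_mt(distance=400, width=40)
```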
&lt;br /&gt;
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance that used curvature to predict the speed of a pen as it moved across a surface, helping to disambiguate target-selection intent. &lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call-center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions and then measure these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Method) aspect handles the parallel nature of several human components of HCI and seems to model the low-level tasks from this study very accurately.  (David)&lt;br /&gt;
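: The GOMS family includes the Keystroke-Level Model, which predicts expert, error-free task time by summing standard per-operator durations. A rough sketch: the operator times below are the commonly cited Card, Moran, and Newell estimates, and the example operator sequence is hypothetical.

```python
# Keystroke-Level Model (KLM): predict expert, error-free task time
# by summing per-operator durations. Times (in seconds) are the
# commonly cited Card/Moran/Newell estimates; real analyses
# calibrate them to the device and user population.
KLM_SECONDS = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point with a mouse to a target
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for a step
}

def klm_time(operators):
    """Total predicted time for a sequence of KLM operator codes."""
    return sum(KLM_SECONDS[op] for op in operators)

# Hypothetical task: think, point at a menu item, then click
# (button press plus release).
example = klm_time("MPBB")  # 1.35 + 1.10 + 0.10 + 0.10 seconds
```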
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
: Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential on menu-technique research over the years. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
: This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
: Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares the performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
: Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
: Any visualization which is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context to design the visualization application appropriately to fit the needs of real world users.  This paper studies in detail the daily workflow tasks and patterns of work of analysts, managers and software developers in a medium-sized software company.  This paper provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality, switch tasks on average every two to three minutes, and instead, work on larger thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, it is strongly indicated by this paper that many information workers are in a constant state of rapid fire multi-tasking.  This suggests that for a visualization to be relevant to any of these information workers, it would need to fit into, and support, this workflow.  This is just a first step towards understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  The advantages of automated evaluation are purported to be linked to efficiency, such as comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks down a taxonomy with individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g. Smith and Mosier guidelines, Motif style guidelines, etc).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop discussion from CHI 2007 discussing the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Discusses things like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidtt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failures, and then discusses misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class, has many good lists of guidelines. Especially Ch.2 pp 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods, a fair introduction to their industrial applications, their costs and benefits, and suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG), 5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper to discuss how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous work to analytic tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (art, science, and technology) and questions and quantifies its utility.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039;s law to trajectory-based tasks.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several InfoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr, Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four GOMS (Goals, Operators, Methods, and Selection rules) based analysis techniques.  GOMS is a widely used UI analysis paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.pdf The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A prelude to the paper above; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interface for 3D selection of neural pathways estimated from MRI imaging of human brains. The mouse-based interface helps neuroscientists to select neural pathways more efficiently and easily. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye tracking methodologies continue to grow and improve. (Clara, 11 September 2011--OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. (Michael)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and to see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
: Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.comp.leeds.ac.uk/umuas/reading-group/kaptelinin-ch5.pdf Activity Theory: Implications for Human-Computer Interaction] Kaptelinin--1996-ATI&lt;br /&gt;
: This article discusses activity theory, an alternative to present theories surrounding HCI. In particular, it examines the principal differences between activity theory and cognitive theory, applies it to HCI, and suggests implications for the field. While not directly relevant to the proposal, it offers an alternate framework for some of the issues that we discuss. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pages.cpsc.ucalgary.ca/~saul/wiki/uploads/HCIPapers/landauer-letsgetreal.pdf Let&#039;s Get Real: a Position Paper on the Role of Cognitive Psychology in the Design of Humanely Usable and Useful Systems] Landauer-1991-LGR&lt;br /&gt;
: Perhaps less useful, only because it&#039;s 20 years old, but an interesting read nonetheless: this paper questions the &amp;quot;modern&amp;quot; relevance of cognitive psychology to human-computer interaction design. The primary issue, it argues, is that human-computer systems are entirely unpredictable, and thus, some of the modern understanding of cognition (and, indeed, HCI theory) simply cannot apply given the erratic behavior of computer systems. Instead, the author addresses some of the more &amp;quot;useful models,&amp;quot; including Fitts&#039; law and theories of visual perception, to define a new space for emerging research in HCI. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1978969&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Mid-air pan-and-zoom on wall-sized displays] Nancel-2011-MPZ&lt;br /&gt;
: The paper describes approaches to perform pan and zoom tasks in mid-air: bimanual &amp;amp; unimanual, linear &amp;amp; circular gestures. &lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979430&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 LiquidText: a flexible, multitouch environment to support active reading] Tashman-2011-LER&lt;br /&gt;
: A technique utilizing multitouch to improve reading efficiency.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979392&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Rethinking &#039;multi-user&#039;: an in-the-wild study of how groups approach a walk-up-and-use tabletop interface] Marshall-2011-RMU&lt;br /&gt;
: An ethnographic study that explores how groups of users approach tabletop interfaces in real environments. Some results contradict existing findings.&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU (&amp;lt;b&amp;gt;Owner: Michael Spector&amp;lt;/b&amp;gt;)&lt;br /&gt;
: Discusses the merits of HCI (here called muCI, for muscle-computer interaction) by detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. (Michael)&lt;br /&gt;
&lt;br /&gt;
*[http://courses.ischool.utexas.edu/rbias/2009/Spring/INF385P/files/annurev.psych.54.101601.pdf HUMAN-COMPUTER INTERACTION: Psychological Aspects of the Human Use of Computing] Olson-2003-HCI&lt;br /&gt;
: Overview of issues in psychology in HCI ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises] Jacob-2003-ETH&lt;br /&gt;
: This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent feud between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  There is little reason given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticisms which follow descriptions, in which theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
: High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:Attempt to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen—the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R]&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar]&lt;br /&gt;
:Soar is another cognitive architecture developed at CMU, now maintained at UMich.  (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.4719&amp;amp;rep=rep1&amp;amp;type=pdf Brain-Computer Interfaces and Human-Computer Interaction]&lt;br /&gt;
: (Not sure what heading this ought to go under!) Brain-Computer Interfaces (BCI) offer a neurological corollary to human-computer interfaces: users use their thoughts to signal to machines, instead of relying on physical movements. Thus, the areas activated are purely cognitive, not motor. The article provides an overview of the differences between HCI and BCI, the implications thereof, and the directions that their interaction may take. Relevant to some of the issues and concepts raised in the proposal, in addition to being a rather interesting idea! (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=101 Spanning Seven Orders of Magnitude: A Challenge for Cognitive Modeling], John Anderson, 2002&lt;br /&gt;
: The paper argues that high-level human behavior can be understood by analyzing the chain of fast, low level activity (from 10ms up) in the perceptual/cognitive bands that compose larger behaviors. It gives an intro to ACT-R and variants and some compelling examples for cognitive modeling and eye-tracking. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://search.ebscohost.com.revproxy.brown.edu/login.aspx?direct=true&amp;amp;db=psyh&amp;amp;AN=2003-08881-009&amp;amp;site=ehost-live Cyberpsychology: A Human-Interaction Perspective Based on Cognitive Modeling] Emond-2003-CHI&lt;br /&gt;
:This paper argues for the applicability of cognitive modeling to cyberpsychology, the study of the impact of computer and Internet interaction on humans. ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:see summary for Alexander below (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices, ~250 rules is enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
: Considers a range of user interfaces, ranging from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic that lies behind tactile user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, by perhaps forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines some of which we may be able to build on in automating interface evaluation; will certainly apply to manual evaluations [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: The paper presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1979100 Understanding Interaction Design Practices], Goodman et al., CHI &#039;11 (Owner: Diem Tran - Steven, if you already own this, let me know)&lt;br /&gt;
: This is a position paper describing the disconnect between HCI research and real interaction design practices.  It analyzes approaches for studying design practice (e.g., reported practice, anecdotal descriptions, first-person research), and argues a need for generative theories of design in order to address practice.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge - Improving Decisions About Wealth, Health, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using Graphical Interaction Histories within the Tableau InfoVis application.  This is a great recent example of &amp;quot;workflow analysis&amp;quot; that we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An Exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to identify the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Same as the previous one.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1495824 Data, Information, and Knowledge in Visualization], Chen et al., CG&amp;amp;A Jan 09&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.  (As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1137505 An Approach to the Perceptual Optimization of Complex Visualizations], House, Bair, and Ware, TVCG, 2006&lt;br /&gt;
: This paper describes a humans-in-the-loop architecture for guiding layered visualizations with multiple visual parameters toward optimal tunings.  They use a genetic algorithm to iteratively produce new &amp;quot;genomes&amp;quot; of visual parameters that are evaluated by humans (and either passed along or terminated in the genetic process).  Finally they do some analysis on the surviving visualization space (though for me, this was less interesting than the generative visualization method using humans and the GA).  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11) -- &#039;&#039;&#039;Owner: Jenna Zeigen&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www2.iwr.uni-heidelberg.de/groups/CoVis/Data/Papers/euroVis10.pdf A Salience-based Quality Metric for Visualization] Jänicke-Chen-2010&lt;br /&gt;
: This paper describes a method for defining quality metrics for visualization based on the distribution of salience over a visualization image. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/990000/989880/p109-plaisant.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932639_76770a28b192503c5f3c0ac3f997a8f1 The Challenge of Information Visualization Evaluation] Plaisant-2004&lt;br /&gt;
: This paper surveys the field of information visualization evaluation - current practices, challenges and possible next steps. It is a relatively old article, though, so it may be superseded by a more current survey. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1170000/1168162/a9-zuk.pdf?ip=138.16.160.6&amp;amp;CFID=41672001&amp;amp;CFTOKEN=81772969&amp;amp;__acm__=1315932894_18cbcddc03c6ac39fc573f72790ea5d1 Heuristics for Information Visualization Evaluation] Zuk et al-2006&lt;br /&gt;
: This paper attempts to apply some well-known heuristic evaluation techniques from HCI to Information Visualization. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5061</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5061"/>
		<updated>2011-09-13T16:48:32Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Cognition */  removing duplicate entry&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, dating from the 1940s, inform visual design and may provide an analogy for integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  The first and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided Search 2.0: A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus -- how hard it is to navigate them and select the right item. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menu items) and identifying the motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, with the process of design)? The authors find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot;, but the models &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1753357 Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visual Design], Heer and Bostock, CHI &#039;10 &lt;br /&gt;
: This paper explores crowdsourcing as a viable method for conducting visualization perception evaluations. They replicate some results of Cleveland and McGill&#039;s 1984 graphical perception paper, and do some analysis on cost and performance of using MTurk for these studies on static, chart-type visualizations. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query from assumptions from eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/PSYCO354/pdfstuff/Readings/Evans2.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon]) (Hua - OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1518701.1518717&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Getting inspired!: understanding how and why examples are used in creative design practice] Herring-2009-GIU&lt;br /&gt;
: A user study on the use of examples to improve creativity. Results show that examples are very useful for inspiring designers with new ideas. Surprisingly, inspiring examples are not limited to those in the design domain, but extend to other areas too.&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper which demonstrates that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
&lt;br /&gt;
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance that used curvature to predict the speed of a pen as it moved across a surface, to help disambiguate target selection intent. &lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual entries (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then measuring these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Modeling) aspect used handles the parallel nature of several human components of HCI and seems to very accurately model the low level tasks from this study.  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
: Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential over the years on research into menu techniques. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
: This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
: Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares the performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
: Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
: Any visualization which is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context to design the visualization application appropriately to fit the needs of real world users.  This paper studies in detail the daily workflow tasks and patterns of work of analysts, managers and software developers in a medium-sized software company.  This paper provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality, switch tasks on average every two to three minutes, and instead, work on larger thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, it is strongly indicated by this paper that many information workers are in a constant state of rapid fire multi-tasking.  This suggests that for a visualization to be relevant to any of these information workers, it would need to fit into, and support, this workflow.  This is just a first step towards understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  The advantages of automated evaluation are purported to be linked to efficiency: comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks down a taxonomy with the individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g., the Smith and Mosier guidelines, Motif style guidelines, etc.).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop discussion from CHI 2007 on the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Covers topics such as adaptive web interfaces, mashups, and dynamic interactions.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidtt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failures, and then discusses misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class, has many good lists of guidelines. Especially Ch.2 pp 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods, a fair introduction to their industrial applications, costs, and benefits, and suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG), 5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper to discuss how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous work to analytics tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (art, science, and technology) and questions and quantifies the utility of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039; law to trajectory-based tasks.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper that quantifies what insight is, by comparing several infoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr, Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Operators, Methods, and Selection rules) based interaction techniques.  GOMS is a widely used UI analysis paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.pdf The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A prelude to the above paper; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interface for 3D selection of neural pathways estimated from MRI scans of human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and easily. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
: This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. Specific attention is given to menu organization in the context of Web interfaces. Although substantial progress has been made in the past decade, the article draws attention to relevant design issues and concepts, especially as eye tracking methodologies continue to grow and improve. (Clara, 11 September 2011--OWNER)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. (Michael)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
: Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www.comp.leeds.ac.uk/umuas/reading-group/kaptelinin-ch5.pdf Activity Theory: Implications for Human-Computer Interaction] Kaptelinin--1996-ATI&lt;br /&gt;
: This article discusses activity theory, an alternative to present theories surrounding HCI. In particular, it examines the principal differences between activity theory and cognitive theory, applies activity theory to HCI, and suggests implications for the field. While not directly relevant to the proposal, it offers an alternate framework for some of the issues that we discuss. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://pages.cpsc.ucalgary.ca/~saul/wiki/uploads/HCIPapers/landauer-letsgetreal.pdf Let&#039;s Get Real: a Position Paper on the Role of Cognitive Psychology in the Design of Humanely Usable and Useful Systems] Landauer-1991-LGR&lt;br /&gt;
: Perhaps less useful, only because it&#039;s 20 years old, but an interesting read nonetheless: this paper questions the &amp;quot;modern&amp;quot; relevance of cognitive psychology to human-computer interaction design. The primary issue, it argues, is that human-computer systems are entirely unpredictable, and thus, some of the modern understanding of cognition (and, indeed, HCI theory) simply cannot apply given the erratic behavior of computer systems. Instead, the author addresses some of the more &amp;quot;useful models,&amp;quot; including Fitts&#039;s law and theories of visual perception, to define a new space for emerging research in HCI. (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1978969&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Mid-air pan-and-zoom on wall-sized displays] Nancel-2011-MPZ&lt;br /&gt;
: The paper describes approaches to perform pan and zoom tasks in mid-air: bimanual &amp;amp; unimanual, linear &amp;amp; circular gestures. &lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979430&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 LiquidText: a flexible, multitouch environment to support active reading] Tashman-2011-LER&lt;br /&gt;
: A technique utilizing multitouch to improve reading efficiency.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1978942.1979392&amp;amp;coll=DL&amp;amp;dl=GUIDE&amp;amp;CFID=42040751&amp;amp;CFTOKEN=34436229 Rethinking &#039;multi-user&#039;: an in-the-wild study of how groups approach a walk-up-and-use tabletop interface] Marshall-2011-RMU&lt;br /&gt;
: An ethnographic study that explores how groups of users approach tabletop interfaces in real environments. Some results contradict existing findings.&lt;br /&gt;
&lt;br /&gt;
* [http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/CHI2008-EMG.pdf Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces] Saponas-2008-DFU (&amp;lt;b&amp;gt;Owner: Michael Spector&amp;lt;/b&amp;gt;)&lt;br /&gt;
: Discusses the merits of HCI (here called muCI, for muscle-computer interaction) by detection of forearm muscle activity rather than manipulation of an object such as a mouse or keyboard. (Michael)&lt;br /&gt;
&lt;br /&gt;
*[http://courses.ischool.utexas.edu/rbias/2009/Spring/INF385P/files/annurev.psych.54.101601.pdf HUMAN-COMPUTER INTERACTION: Psychological Aspects of the Human Use of Computing] Olson-2003-HCI&lt;br /&gt;
: Overview of issues in psychology in HCI ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the promises] Jacob-2003-ETH&lt;br /&gt;
: This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent feud between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  There is little reason given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticism that follows each description, in which the theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
: High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as it relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:An attempt to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen: the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R]&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind. (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar]&lt;br /&gt;
:Soar is another cognitive architecture developed at CMU, now maintained at the University of Michigan.  (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.4719&amp;amp;rep=rep1&amp;amp;type=pdf Brain-Computer Interfaces and Human-Computer Interaction]&lt;br /&gt;
: (Not sure what heading this ought to go under!) Brain-Computer Interfaces (BCI) offer a neurological counterpart to human-computer interfaces: users signal to machines with their thoughts instead of relying on physical movements. Thus, the areas activated are purely cognitive, not motor. The article provides an overview of the differences between HCI and BCI, the implications thereof, and the directions that their interaction may take. Relevant to some of the issues and concepts raised in the proposal, in addition to being a rather interesting idea! (Clara, 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=101 Spanning Seven Orders of Magnitude: A Challenge for Cognitive Modeling], John Anderson, 2002&lt;br /&gt;
: The paper argues that high-level human behavior can be understood by analyzing the chain of fast, low level activity (from 10ms up) in the perceptual/cognitive bands that compose larger behaviors. It gives an intro to ACT-R and variants and some compelling examples for cognitive modeling and eye-tracking. ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
*[http://search.ebscohost.com.revproxy.brown.edu/login.aspx?direct=true&amp;amp;db=psyh&amp;amp;AN=2003-08881-009&amp;amp;site=ehost-live Cyberpsychology: A Human-Interaction Perspective Based on Cognitive Modeling] Emond-2003-CHI&lt;br /&gt;
:This paper discusses the applicability of cognitive modeling to cyberpsychology, the study of the impact of computer and Internet interaction on humans. ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11)&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:see summary for Alexander below (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices, ~250 rules is enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
: Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic that lies behind tangible user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara, 11 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, by perhaps forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines some of which we may be able to build on in automating interface evaluation; will certainly apply to manual evaluations [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: The paper presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1979100 Understanding Interaction Design Practices], Goodman et al., CHI &#039;11 (Owner: Diem Tran - Steven, if you already own this, let me know)&lt;br /&gt;
: This is a position paper describing the disconnect between HCI research and real interaction design practices.  It analyzes approaches for studying design practice (e.g., reported practice, anecdotal descriptions, first-person research), and argues a need for generative theories of design in order to address practice.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge: Improving Decisions About Health, Wealth, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using Graphical Interaction Histories within the Tableau InfoVis application.  This is a great recent example of &amp;quot;workflow analysis&amp;quot; that we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to identify the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Same as the previous one.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1495824 Data, Information, and Knowledge in Visualization], Chen et al., CG&amp;amp;A Jan 09&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.  (As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1399136 Toward a Perceptual Theory of Flow Visualization], Colin Ware, CG&amp;amp;A March 08&lt;br /&gt;
: This paper is a good entry point for Ware&#039;s other work on neural modeling for visualization.  It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields).  There&#039;s also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories.  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1137505 An Approach to the Perceptual Optimization of Complex Visualizations], House, Bair, and Ware, TVCG, 2006&lt;br /&gt;
: This paper describes a humans-in-the-loop architecture for guiding layered visualizations with multiple visual parameters toward optimal tunings.  They use a genetic algorithm to iteratively produce new &amp;quot;genomes&amp;quot; of visual parameters that are evaluated by humans (and either passed along or terminated in the genetic process).  Finally they do some analysis on the surviving visualization space (though for me, this was less interesting than the generative visualization method using humans and the GA).  ([[User:Steven Gomez|Steven Gomez]])&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
*[http://ejournals.ebsco.com.revproxy.brown.edu/Direct.asp?AccessToken=6VFL2LL89KIIOIXL2I3O9IOHVIC28L2MMF&amp;amp;Show=Object Cognitive Models of the Influence of Color Scale on Data Visualization Tasks] -Breslow-2009-CMI&lt;br /&gt;
: Discusses the ways color scales and differences can be influential in the optimization of data visualization and analysis ([[User:Jenna Zeigen|Jenna Zeigen]], 9/12/11) -- &#039;&#039;&#039;Owner: Jenna Zeigen&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://www2.iwr.uni-heidelberg.de/groups/CoVis/Data/Papers/euroVis10.pdf A Salience-based Quality Metric for Visualization] Jänicke-Chen-2010&lt;br /&gt;
: This paper describes a method for defining quality metrics for visualization based on the distribution of salience over a visualization image. ([[User:Hua Guo|Hua]])&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5001</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5001"/>
		<updated>2011-09-12T16:19:51Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: clearing timestamps and timezones, for cleaner formatting&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, from the 1940s, inform visual design and may provide an analogy for the integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  The first and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided search 2. 0. A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: An experiment with implications for the development of gesture-interpretation modules.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, the process of design)? They find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot; but it &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
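: For reference, the law is commonly written as T = a + b log&amp;lt;sub&amp;gt;2&amp;lt;/sub&amp;gt;(n + 1), where T is the decision time, n the number of equally probable choices, and a and b empirically fitted constants.&lt;br /&gt;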
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query based on assumptions about eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sjdm.org/mail-archive/jdm-society/attachments/20060401/963b50da/attachment-0003.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper demonstrating that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
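: For reference, Fitts&#039;s law in its common Shannon formulation is MT = a + b log&amp;lt;sub&amp;gt;2&amp;lt;/sub&amp;gt;(D/W + 1), where MT is the movement time, D the distance to the target, W the target width, and a and b empirically fitted constants.&lt;br /&gt;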
&lt;br /&gt;
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance, which used curvature to predict the speed of a pen as it moved across a surface, to help disambiguate target selection intent.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions and then measure these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Modeling) aspect used handles the parallel nature of several human components of HCI and seems to very accurately model the low-level tasks from this study.  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential over the years on research into menu techniques. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;small&amp;gt;[http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
&lt;br /&gt;
Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares the performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
&lt;br /&gt;
Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;250-word summary and relevance statement&#039;&#039;&#039; (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
Any visualization which is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context to design the visualization application appropriately to fit the needs of real world users.  This paper studies in detail the daily workflow tasks and patterns of work of analysts, managers and software developers in a medium-sized software company.  This paper provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality, switch tasks on average every two to three minutes, and instead, work on larger thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, it is strongly indicated by this paper that many information workers are in a constant state of rapid fire multi-tasking.  This suggests that for a visualization to be relevant to any of these information workers, it would need to fit into, and support, this workflow.  This is just a first step towards understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  The advantages of automated evaluation are purported to be linked to efficiency, such as comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks down the taxonomy with the individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g., Smith and Mosier guidelines, Motif style guidelines, etc.).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop discussion from CHI 2007 discussing the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Discusses things like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. (Clara)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failure, then discusses misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class, has many good lists of guidelines. Especially Ch.2 pp 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of main usability inspection methods, a fair introduction to the industrial applications, as well as certain costs and benefits, of the methods as well as suggestions for expansive research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG),&lt;br /&gt;
5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper to address how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous paper to analytic tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (as art, science, and technology) and questions and quantifies its utility.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039;s law to trajectory-based tasks.&lt;br /&gt;
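: Their steering law replaces Fitts&#039;s index of difficulty with a path integral, T = a + b ∫ ds/W(s): movement time along a path grows with its length and with the narrowness W(s) of the tunnel at each point.&lt;br /&gt;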
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper that quantifies what insight is, by comparing several infoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr.Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Objects, Methods and Selection) based interaction techniques.  GOMS is a widely used UI paradigm, made popular by Card et al in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.PDF The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A paper in form of prelude to the above, gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interface for 3D selection of neural pathways estimated from MRI of human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and easily. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. (Michael)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the promises] Jacob-2003-ETH&lt;br /&gt;
: This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed, must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent feud between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  There is little reason given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticisms which follow descriptions, in which theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
&lt;br /&gt;
: High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:Attempt to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen: the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R] (Hua)&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar] (Hua)&lt;br /&gt;
:Soar is another cognitive architecture originally developed at CMU, now maintained at the University of Michigan.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:see summary for Alexander below (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices, ~250 rules is enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual spaces work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
: Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic that lies behind tangible user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, by perhaps forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines some of which we may be able to build on in automating interface evaluation; will certainly apply to manual evaluations [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: Presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge - Improving Decisions About Wealth, Health, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using Graphical Interaction Histories within the Tableau InfoVis application.  This is a great recent example of &amp;quot;workflow analysis&amp;quot; that we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An Exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to identify the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Also on collaborative visual analysis, like the previous entry; this one focuses on the synthesis of analytic results.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* Min Chen, David Ebert, Hans Hagen, Robert S. Laramee, Robert van Liere, Kwan-Liu Ma, William Ribarsky, Gerik Scheuermann, Deborah Silver, &amp;quot;Data, Information, and Knowledge in Visualization,&amp;quot; IEEE Computer Graphics and Applications, vol. 29, no. 1, pp. 12-19, Jan./Feb. 2009, doi:10.1109/MCG.2009.6&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.&lt;br /&gt;
(As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]], 12 September 2011)&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5000</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=5000"/>
		<updated>2011-09-12T16:16:57Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: added missing signature&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, from the 1940s, inform visual design and may provide an analogy for integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  First and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided Search 2.0: A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, the process of design)? They find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot; but it &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. ([[User:Nathan Malkin|Nathan Malkin]] 12:16, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between decision-making time and the number of possible choices: with n equally probable alternatives, decision time grows as T = b·log2(n + 1). (Hua)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query, based on assumptions from eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sjdm.org/mail-archive/jdm-society/attachments/20060401/963b50da/attachment-0003.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:Example of a paper demonstrating that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
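: As a reference note (the standard Shannon formulation, not quoted from the paper itself): Fitts&#039;s law predicts pointing time from target distance and width, and the bubble cursor speeds up pointing by effectively enlarging the target width W.&lt;br /&gt;

```latex
% Fitts's law (Shannon formulation):
% MT = movement time to acquire a target,
% D  = distance to the target, W = target width,
% a, b = empirically fitted constants.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```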
&lt;br /&gt;
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance, which used curvature to predict the speed of a pen moving across a surface, to help disambiguate target-selection intent.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then measuring these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Modeling) aspect used handles the parallel nature of several human components of HCI and seems to very accurately model the low level tasks from this study.  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
:Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential over the years on research into menu techniques. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
:This is a system that combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;small&amp;gt;[http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
&lt;br /&gt;
Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares the performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
&lt;br /&gt;
Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;250-word summary and relevance statement&#039;&#039;&#039; (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
Any visualization that is used extensively over a period of time will be used in the context of the user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context in order to design the visualization application to fit the needs of real-world users.  This paper studies in detail the daily workflow tasks and patterns of work of analysts, managers, and software developers at a medium-sized software company.  It provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality switch tasks on average every two to three minutes, and instead work on larger, thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, the paper strongly indicates that many information workers are in a constant state of rapid-fire multi-tasking, which suggests that for a visualization to be relevant to these information workers, it would need to fit into, and support, this workflow.  This is only a first step toward understanding how users interact with visualizations in particular; future work studying how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  The purported advantages of automated evaluation are linked to efficiency: comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks the taxonomy down with the individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g. the Smith and Mosier guidelines, the Motif style guidelines, etc.).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A CHI 2007 workshop discussion of the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Covers topics like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidtt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
:This article examines eye-tracking techniques as they pertain to improving search performance in user interfaces. (Clara)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failure and then discuss misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class; it has many good lists of guidelines, especially Ch. 2, pp. 59-102. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods: a fair introduction to their industrial applications and to the costs and benefits of the methods, as well as suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG),&lt;br /&gt;
5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper to discuss how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous paper to analytic tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (art, science, and technology) and questions and quantifies the utility of visualization.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039;s law to trajectory-based tasks.&lt;br /&gt;
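: For reference (the standard statement of the resulting steering law, not quoted from the paper): the time to steer through a constrained path is modeled as an integral over the path&#039;s width.&lt;br /&gt;

```latex
% Accot-Zhai steering law: time T to traverse path C whose
% width at arc-length s is W(s); a, b are fitted constants.
T = a + b \int_{C} \frac{ds}{W(s)}
% For a straight tunnel of length A and constant width W,
% this reduces to T = a + b\,(A/W).
```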
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several infoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr, Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Operators, Methods, and Selection rules) based interaction techniques.  GOMS is a widely used UI analysis paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983).&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.PDF The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A prelude to the above; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interface for 3D selection of neural pathways estimated from MRI imaging of human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and easily. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. (Michael)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the promises] Jacob-2003-ETH&lt;br /&gt;
This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent feud between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  There is little reason given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticisms which follow descriptions, in which theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
&lt;br /&gt;
High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:An attempt to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen: the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R] (Hua)&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar] (Hua)&lt;br /&gt;
:Soar is another cognitive architecture, also developed at CMU and now maintained at the University of Michigan.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:see summary for Alexander below (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices; ~250 rules is enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
:Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic behind tangible user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, by perhaps forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines some of which we may be able to build on in automating interface evaluation; will certainly apply to manual evaluations [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: The paper presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge: Improving Decisions About Health, Wealth, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using Graphical Interaction Histories within the Tableau InfoVis application.  This is a great recent example of &amp;quot;workflow analysis&amp;quot; that we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An Exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to identify the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Addresses the same topic as the previous entry.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* Min Chen, David Ebert, Hans Hagen, Robert S. Laramee, Robert van Liere, Kwan-Liu Ma, William Ribarsky, Gerik Scheuermann, Deborah Silver, &amp;quot;Data, Information, and Knowledge in Visualization,&amp;quot; IEEE Computer Graphics and Applications, vol. 29, no. 1, pp. 12-19, Jan./Feb. 2009, doi:10.1109/MCG.2009.6&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.&lt;br /&gt;
: (As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=4999</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=4999"/>
		<updated>2011-09-12T16:15:44Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: added potential readings (first assignment)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, from the 1940s, inform visual design and may provide an analogy for the integration of theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique for &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
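: The core trick above, scaling the control/display gain with the local surface slope, can be sketched in a few lines. The linear gain model below is an assumption for illustration, not the paper&#039;s exact formula:&lt;br /&gt;

```python
# Pseudo-haptic texture sketch (assumed linear model, not Lecuyer et al.'s
# exact mapping): the cursor slows while climbing a virtual bump (positive
# slope) and speeds up while descending, simulating resistance.
def pseudo_haptic_gain(slope, k=0.5):
    # Clamp so the cursor never stops or reverses.
    return max(0.1, 1.0 - k * slope)

def display_motion(mouse_dx, slope, k=0.5):
    """On-screen displacement for a physical mouse displacement mouse_dx."""
    return mouse_dx * pseudo_haptic_gain(slope, k)
```
: Flat ground leaves motion unchanged; a slope of 1.0 halves it with the default k.&lt;br /&gt;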
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  First and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided Search 2.0: A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/iel1/21/4054/00156574.pdf?tp=&amp;amp;arnumber=156574&amp;amp;isnumber=4054 Ecological interface design: Theoretical foundations] Vicente-1992-EID&lt;br /&gt;
: Theory of how interfaces can avoid forcing processing at a higher level than the task requires. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/ftinterface~content=a784403799~fulltext=713240930 The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains] Vicente-2000-EHM&lt;br /&gt;
: Taking advantage of fast perceptual processes to reduce cognitive demands as applied to the design of a thermal-hydraulic system. (Jon) 14:56, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff-1998-AVW.&lt;br /&gt;
: Experiment that has implications for gesture interpretation module development.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1993063 Effects of motor scale, visual scale, and quantization on small target acquisition difficulty] Chapuis-2011-EMS&lt;br /&gt;
: In the first class, we talked about the problem with Windows Start menus  -- how hard it is to navigate and select the right one. This study provides empirical evidence for this problem, confirming the difficulty of acquiring small-sized targets (like the menus) and identifying motor and visual sizes of the targets as limiting factors. ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1870080 Do predictions of visual perception aid design?] Rosenholtz-2011-DPV&lt;br /&gt;
: This paper asks the question: does the use of cognitive and perceptual models actually help (in this case, with the process of design)? They find that &amp;quot;the models can help, but in somewhat unexpected ways&amp;quot;: &amp;quot;&amp;quot;goodness&amp;quot; values were not very useful&amp;quot; but it &amp;quot;seemed to facilitate communication ... about design goals and how to achieve those goals&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hick%27s_law Wikipedia&#039;s Hick&#039;s Law page]&lt;br /&gt;
: Hick&#039;s law describes the relationship between the decision-making time and the number of possible choices. (Hua)&lt;br /&gt;
&lt;br /&gt;
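: The relationship is usually written T = b * log2(n + 1) for n equally likely choices. A minimal sketch; the constant b below is illustrative, not from the cited page:&lt;br /&gt;

```python
import math

# Hick's law: decision time grows logarithmically with the number of
# equally likely choices. b is an empirically fitted constant in
# seconds per bit; the default here is illustrative only.
def hick_decision_time(n_choices, b=0.2):
    return b * math.log2(n_choices + 1)
```
: Doubling the number of menu items thus adds only a constant increment to predicted decision time.&lt;br /&gt;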
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query based on eye-tracking assumptions, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sjdm.org/mail-archive/jdm-society/attachments/20060401/963b50da/attachment-0003.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
:A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Feasibility and pragmatics of classifying working memory load with an electroencephalograph]&lt;br /&gt;
&lt;br /&gt;
:Examines how practical it is to use electroencephalographs to measure cognitive load, and discusses the domain-specific knowledge needed.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Dynamic Detection of Novice vs Skilled Use]&lt;br /&gt;
&lt;br /&gt;
:Used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate. &lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Bubble Cursor]&lt;br /&gt;
&lt;br /&gt;
:An example of a paper demonstrating that a novel interaction technique still obeys Fitts&#039;s law.&lt;br /&gt;
&lt;br /&gt;
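: Fitts&#039;s law itself is compact enough to sketch: the Shannon formulation predicts movement time from target distance and width. The constants a and b below are illustrative, not taken from the paper:&lt;br /&gt;

```python
import math

# Fitts's law (Shannon formulation): MT = a + b * log2(D/W + 1),
# where D is the distance to the target and W its width. The intercept a
# and slope b are device-specific; the defaults here are illustrative.
def fitts_movement_time(distance, width, a=0.1, b=0.15):
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty
```
: Techniques like the Bubble Cursor effectively enlarge W, lowering the index of difficulty without moving the target.&lt;br /&gt;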
* [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Sloppy Selection]&lt;br /&gt;
&lt;br /&gt;
:Utilized a quantitative model of user performance which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent. &lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Disruption and Recovery of Computing Tasks]&lt;br /&gt;
&lt;br /&gt;
:Studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions. This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 A diary study of task switching and interruptions]&lt;br /&gt;
&lt;br /&gt;
:Showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then measuring these actions to evaluate the new interface. (Eric)  The CPM (Critical Path Modeling) aspect used handles the parallel nature of several human components of HCI and seems to very accurately model the low level tasks from this study.  (David)&lt;br /&gt;
&lt;br /&gt;
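: For readers unfamiliar with the GOMS family, its simplest member, the Keystroke-Level Model, just sums per-operator time estimates. A minimal sketch using the classic Card, Moran, and Newell averages (treat the constants as illustrative; real analyses calibrate them per device and user):&lt;br /&gt;

```python
# Keystroke-Level Model (KLM) sketch: predicted task time is the sum of
# primitive operator times. Constants are the textbook averages in seconds.
KLM_OPERATORS = {
    "K": 0.28,  # press a key (typical typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence):
    """Total predicted time for an operator string like 'MHPK'."""
    return sum(KLM_OPERATORS[op] for op in sequence)
```
: CPM-GOMS, used in Project Ernestine, goes further by scheduling such operators in parallel along a critical path rather than strictly summing them.&lt;br /&gt;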
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
:Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential over the years on research into menu techniques. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
:This system combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039;s law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;small&amp;gt;[http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
&lt;br /&gt;
Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://hfs.sagepub.com/cgi/reprint/44/1/62.pdf Ecological Interface Design: Progress and challenges] Vicente-2002-EID&lt;br /&gt;
: Discusses the implications of Ecological Interface Design (EID), a theoretical HCI framework, for designing human-computer interfaces and compares the performance of EID-informed designs to other contemporary approaches. (Owner: Jon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
&lt;br /&gt;
Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;250-word summary and relevance statement&#039;&#039;&#039; (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
Any visualization that is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context in order to design the visualization application to fit the needs of real-world users.  This paper studies in detail the daily workflow tasks and work patterns of analysts, managers, and software developers in a medium-sized software company.  It provides strong empirical evidence that users, rather than working on discrete and well-defined tasks, in reality switch tasks on average every two to three minutes, and instead work on larger thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, the paper strongly indicates that many information workers are in a constant state of rapid-fire multi-tasking.  This suggests that for a visualization to be relevant to any of these information workers, it would need to fit into, and support, this workflow.  This is just a first step towards understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  The purported advantages of automated evaluation are linked to efficiency: comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks the taxonomy down with the individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g. the Smith and Mosier guidelines, the Motif style guidelines, etc.).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop discussion from CHI 2007 discussing the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Discusses things like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.3761&amp;amp;rep=rep1&amp;amp;type=pdf Could I have the Menu Please? An Eye Tracking Study of Design Conventions] McCarthy-2003-CMP&lt;br /&gt;
This article examines eye tracking techniques as they pertain to improving search performance in user interfaces. (Clara)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and the related metacognitive failures, and then discusses misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class, has many good lists of guidelines. Especially Ch.2 pp 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods, a fair introduction to their industrial applications, and the costs and benefits of the methods, as well as suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG),&lt;br /&gt;
5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper to discuss how to automatically generate &#039;&#039;good&#039;&#039; graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous paper to analytics tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (as art, science, and technology) and questions and quantifies its utility.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039; law to trajectory-based tasks.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several InfoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr. Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Operators, Methods, and Selection rules) based interaction techniques.  GOMS is a widely used UI analysis paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.PDF The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A paper serving as a prelude to the above; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/1130000/1125552/p454-akers.pdf?ip=138.16.160.6&amp;amp;CFID=41848857&amp;amp;CFTOKEN=88172531&amp;amp;__acm__=1315781624_3f3ff7dffae48746685278b9f2b7dabb Wizard of Oz for Participatory Design: Inventing a Gestural Interface for 3D Selection of Neural Pathway Estimates], Akers-2006-WOP.&lt;br /&gt;
:Designs an interface for 3D selection of neural pathways estimated from MRI scans of human brains. The mouse-based interface helps neuroscientists select neural pathways more efficiently and easily. (Chen)&lt;br /&gt;
&lt;br /&gt;
* [http://www.research.ibm.com/AVSTG/icassp_pose.pdf Audio-Visual Intent-To-Speak Detection For Human-Computer Interaction], Cuetos-2000-ISD.&lt;br /&gt;
:Discusses a speech detection system that uses both auditory and visual cues to more accurately detect speech commands. It aims to recognize the user&#039;s intention to speak, and to ignore background noise, or speech recognized as not being directed at the system. Although it is fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. (Michael)&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1959023 An exploration of relations between visual appeal, trustworthiness and perceived usability of homepages] Lindgaard-2011-ERB&lt;br /&gt;
: This paper is interesting and relevant to cognition+HCI because it attempts to differentiate between &amp;quot;judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness)&amp;quot; and see whether those tasks with more cognitive demand have different results. (The paper includes a model to account for these.)&lt;br /&gt;
Also, this is interesting to Steve and me in the context of some of our discussions this past spring. (Apparently, yes: &amp;quot;all three types of judgments [including, crucially, trustworthiness] are largely driven by visual appeal&amp;quot;.) ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&amp;amp;rep=rep1&amp;amp;type=pdf Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises] Jacob-2003-ETH&lt;br /&gt;
This paper (book chapter) looks beyond the relevance of eye tracking methodologies to HCI and instead addresses the data produced. It examines various approaches to analysis and the implications and conclusions that can be drawn. Given that eye tracking is often coupled with other inputs, such as a mouse or a keyboard, analysis is rarely clear-cut: other variables, such as error, saccades, and speed must be factored in. Moreover, eye movements are far less deliberate than mechanical (i.e. mouse) input, and so errors must be handled differently. The chapter discusses each of these issues and subsequently offers solutions. In general, the article argues for the importance of eye tracking, considering it as a central component of HCI methodology. (Clara)&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent tension between the accurate study of relative contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  Little reason is given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticism that follows each description, in which the theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
&lt;br /&gt;
High-level theory of human-computer dialogues.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
&lt;br /&gt;
: This is a cognitive model of how users find and learn commands in an unfamiliar user interface.  This could potentially be adapted to be a piece of a theory of visualization.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=PublicationURL&amp;amp;_tockey=%23TOC%236829%232001%23999449998%23287248%23FLP%23&amp;amp;_cdi=6829&amp;amp;_pubType=J&amp;amp;_auth=y&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=64b44ef6df77ae073d41a7367db866b5 International Journal of Human-Computer Studies - Special Issue on Cognitive Modeling]&lt;br /&gt;
:Articles all concerning various issues of cognitive modeling as relates to HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VDC-4811TNB-5&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=259b5c105222437fc84990b7a1eaedee The role of cognitive theory in human–computer interface].  Chalmers, Patricia A, 2003.&lt;br /&gt;
:Was scared again, but no need to be.  Touches only on a subset of cognitive theories (Schema theory, Cognitive load, and retention theories) and undertakes a survey of some software design theories, but does not attempt an explicit mapping between the two. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=26115.26124 Design Guidelines]. Marshall, Nelson, and Gardiner, 1987.&lt;br /&gt;
:Attempts to apply cognitive psychology to user-interface design.  Here, the opposite problem is seen: the authors make no significant attempt to take existing heuristic guidelines into account. [[User:E J Kalafarski|E J Kalafarski]] 13:48, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cognitivedesignsolutions.com/ Cognitive Design Solutions, Inc.]&lt;br /&gt;
:Training and consulting firm that claims to take advantage of Cognitive Design in making design and performance improvements. [[User:E J Kalafarski|E J Kalafarski]] 13:50, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://infolab.uvt.nl/research/lap2003/agerfalk.pdf Actability Principles in Theory and Practice]. Agerfalk, Par J, 2003.&lt;br /&gt;
:Presents a set of nine contemporary principles for the evaluation of IT systems (&amp;quot;social tools to perform communicative action&amp;quot;) based explicitly on cognitive principles.  Introduces a notion comparable to usability called &#039;&#039;actability&#039;&#039;.  Presents a mapping for some basic usability principles to some seminal sets of guidelines. [[User:E J Kalafarski|E J Kalafarski]] 14:29, 20 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/ACT-R Wikipedia page for ACT-R] (Hua)&lt;br /&gt;
:ACT-R is a cognitive architecture developed at CMU. It aims to define the basic cognitive and perceptual operations of the human mind.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29 Wikipedia page for Soar] (Hua)&lt;br /&gt;
:Soar is another cognitive architecture developed at CMU, now maintained at the University of Michigan.&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658355 Computational visual attention systems and their cognitive foundations: A survey] Frintrop-2010-CVA&lt;br /&gt;
: This paper &amp;quot;provides an extensive survey of the grounding psychological and biological research on visual attention as well as the current state of the art of computational systems&amp;quot;. It should make for good background reading if we want to work with visual attention (detecting regions of interest in images). ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
:see summary for Alexander below (David)&lt;br /&gt;
&lt;br /&gt;
* UI Design principles (feedback, etc -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
:The original design pattern source; what makes a human space work, ineffable best practices, ~250 rules is enough to do communities and house-sized artifacts; could be a good metaphor for making human virtual space work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.5121&amp;amp;rep=rep1&amp;amp;type=pdf On tangible user interfaces, humans, and spatiality] Sharlin-2004-TUI&lt;br /&gt;
Considers a range of user interfaces, from the ordinary computer mouse to the cognitive cube, and the heuristics that underlie their use. The article covers the logic behind tangible user interfaces, with an eye to the cognitive systems and spatial relations involved. (Clara)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
:A specific UI proposal, but has nice relevant discussion on how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information on its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
:Design method with emphasis on discouraging undesirable behavior, by perhaps forcing the user to adapt to the interface, giving equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines some of which we may be able to build on in automating interface evaluation; will certainly apply to manual evaluations [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for  Information Visualization], UM, tech-report, CS-TR-3665. (Jian)&lt;br /&gt;
: The paper presents Shneiderman&#039;s visual information-seeking mantra: overview first, zoom and filter, then details on demand.&lt;br /&gt;
&lt;br /&gt;
* [http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf A pattern approach to interaction design] Borchers-2001-PAI&lt;br /&gt;
: A highly-cited work on the development of a language for defining design patterns for use in interface development, with an emphasis on communication between application developers and application domain experts. [[User:E J Kalafarski|E J Kalafarski]] 16:37, 3 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
&lt;br /&gt;
:Set of methods for solving problems that might be incorporated into tools for thinking (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
&lt;br /&gt;
:An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for some thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
&lt;br /&gt;
:a research agenda for tools for intelligence analysts; not sure of relevance (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge: Improving Decisions About Health, Wealth, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008)&lt;br /&gt;
: Work by Jeff Heer of Stanford (formerly Berkeley) on using Graphical Interaction Histories within the Tableau InfoVis application.  This is a great recent example of the &amp;quot;workflow analysis&amp;quot; we&#039;ve been discussing in class.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluations from the deployment of his techniques within Tableau. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Gotz-2008-CUV.pdf Characterizing Users&#039; Visual Analytic Activity for Insight Provenance]&lt;br /&gt;
: The authors look into combining user-triggered and automatically generated visualization histories.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Isenberg-2008-ESV.pdf An Exploratory Study on Visual Information Analysis]&lt;br /&gt;
: The authors run a user study to identify the tasks involved in collaborative evidence aggregation.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Robinson-2008-CSV.pdf Collaborative Synthesis of Visual Analytic Results]&lt;br /&gt;
: Covers similar ground to the previous entry.&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.brown.edu/research/vis/docs/pdf/bib/Pirolli-2005-SPL.pdf The sensemaking process and leverage points for technology]&lt;br /&gt;
: The authors propose a model of analysis and identify leverage points for visualization.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* Min Chen, David Ebert, Hans Hagen, Robert S. Laramee, Robert van Liere, Kwan-Liu Ma, William Ribarsky, Gerik Scheuermann, Deborah Silver, &amp;quot;Data, Information, and Knowledge in Visualization,&amp;quot; IEEE Computer Graphics and Applications, vol. 29, no. 1, pp. 12-19, Jan./Feb. 2009, doi:10.1109/MCG.2009.6&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1658353 Volume composition and evaluation using eye-tracking data] Lu-2010-VCE&lt;br /&gt;
: Brain data sets are huge, and rendering all the information they contain (at the same time) is almost impossible. To deal with this problem, we could use the approach proposed in this paper (with different data): choosing rendering parameters based on where the user&#039;s attention is focused, using eye tracking data to determine that. ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773971 Neural modeling of flow rendering effectiveness]&lt;br /&gt;
: This paper provides a comparison and discussion of &amp;quot;the relative strengths of different flow visualization methods for the task of visualizing advection pathways&amp;quot;. This could be useful in selecting visualization methods for the brain circuits software.&lt;br /&gt;
(As an added bonus, they cite Laidlaw et al.&#039;s &amp;quot;advection task&amp;quot; right there in the abstract.) ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1773970 Evaluating 2D and 3D visualizations of spatiotemporal information] Kjellin-2008-E23&lt;br /&gt;
: A frequent topic of interest to brain scientists is longitudinal data: how does the brain change over time? If the brain circuits software were to support answering this kind of question, we might evaluate different approaches to visualization using the methods in this paper. ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
==Evaluation and Metrics==&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: towards a standardized survey metric for creativity support] Carroll-2009-CFE&lt;br /&gt;
: One alternative to evaluating visualizations and other tools based on the amount of time they save is evaluating them based on how much they help creativity. This paper presents a survey metric for creativity support tools. ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1970381 Measuring multitasking behavior with activity-based metrics] Benbunan-Fich-2011-MMB&lt;br /&gt;
: When designing interfaces for scientists, we must be mindful of the fact that (like all users) they will be multitasking -- both in terms of cognitive tasks (drawing from multiple sources, evaluating different hypotheses, etc.) and (if the interface allows it) tasks within the software. This paper proposes a definition of multitasking and provides a set of metrics for (computer-based) multitasking. ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?id=1879836 Parallel prototyping leads to better design results, more divergence, and increased self-efficacy] Dow-2010-PPL&lt;br /&gt;
: Not too relevant to the cognition aspects of the proposals, but provides some empirical support for &amp;quot;fast iteration&amp;quot; and related software design techniques, whose virtues are extolled in the proposals. The lesson: if we&#039;re going to prototype something for this class, and we want &amp;quot;better design results, more divergence, and increased self-efficacy&amp;quot;, we should do it in parallel! ([[User:Nathan Malkin|Nathan Malkin]] 12:15, 12 September 2011 (EDT))&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Plans_and_Goals&amp;diff=4969</id>
		<title>Plans and Goals</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Plans_and_Goals&amp;diff=4969"/>
		<updated>2011-09-09T03:09:33Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Nathan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page the members of the [[VRL]] record and refine their goals for the current semester.  This is a living document in which [[dhl]] will provide feedback.  See the bottom of the page for links to past plans &amp;amp; goals documents.&lt;br /&gt;
&lt;br /&gt;
== Current Schedule ==&lt;br /&gt;
Meetings are on Tuesdays.  The authoritative list is in dhl&#039;s calendar. (updated here 9/24/10)&lt;br /&gt;
&lt;br /&gt;
== Current Plans and Goals (Summer-Fall &#039;11) ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [[User:Brad Berg|Brad]] ===&lt;br /&gt;
&lt;br /&gt;
Projects&lt;br /&gt;
&lt;br /&gt;
* Support the ongoing VRL Group projects:&lt;br /&gt;
** Jadrian&#039;s brain research.&lt;br /&gt;
** Wenjin&#039;s Axon estimation project.&lt;br /&gt;
** Radu&#039;s Protein project:&amp;lt;br&amp;gt;&lt;br /&gt;
::Assist Radu in integrating a test and any recent code.&lt;br /&gt;
&lt;br /&gt;
* Support collaborative projects.&lt;br /&gt;
** Revised wrist registration.&lt;br /&gt;
&lt;br /&gt;
* Contribute to the design and development of the new cave.  Integrate changes to vrg3d. Develop an API for new devices.  Test and support vrg3d in the new cave.&lt;br /&gt;
&lt;br /&gt;
* Add to the public distribution of diffusion imaging data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Infrastructure&lt;br /&gt;
&lt;br /&gt;
* Continue to write Wiki entries as needed.&lt;br /&gt;
&lt;br /&gt;
* Continue setting up third party software for students to use. &lt;br /&gt;
&lt;br /&gt;
* Extend the build system to make kits for public source and binary distributions.&lt;br /&gt;
&lt;br /&gt;
* Continue extending the makefiles to support new features such as additional third-party packages.&lt;br /&gt;
&lt;br /&gt;
* Continue to integrate any new tests written by students into nightly test runs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Cagatay Demiralp|Çağatay]] === &lt;br /&gt;
&lt;br /&gt;
# Give a good proposal talk. &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Caroline Ziemkiewicz|Caroline]] ===&lt;br /&gt;
Fall Goals&lt;br /&gt;
* Apply for jobs!&lt;br /&gt;
* Make a good showing at VisWeek. Find some outside references for job search.&lt;br /&gt;
* Submit a small IIS grant proposal. &lt;br /&gt;
* Submit a CHI paper or note on CBDM data. &lt;br /&gt;
* Publish a Vis Viewpoint on personality research.&lt;br /&gt;
* Prepare publications for EuroVis and VisWeek. &lt;br /&gt;
* Keep working on research ideas with Steve &amp;amp; Radu. &lt;br /&gt;
* Contribute to CS295J. Get research and teaching ideas.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== David ===&lt;br /&gt;
#get diffusion normals, one library, and one executable distributed (late...)&lt;br /&gt;
#get one diffusion analysis completed and out (Paul? Cohen? hire?)&lt;br /&gt;
#visweek papers&lt;br /&gt;
#*Steve: comps (need title, brain diagrams and events expanded?)&lt;br /&gt;
#*Cagatay: coloring manifesto&lt;br /&gt;
#*Cagatay: metric learning via interaction (?)&lt;br /&gt;
#*Cagatay: cycles of brains (?)&lt;br /&gt;
#*Radu: Analysis nudges: guiding scientists towards improved analytic practices&lt;br /&gt;
#*Caroline: A Task Analysis of Reasoning with Multi-View Visualization (is vis appropriate?)&lt;br /&gt;
#*Ryan: get involved with Cagatay&#039;s?  Cohen analysis (maybe not vis)?&lt;br /&gt;
#*Wenjin: too far from Vis?&lt;br /&gt;
#*Jadrian: modeling curve bundles? small problem results?&lt;br /&gt;
#support 7 phd students&#039;  progress toward graduation and careers&lt;br /&gt;
#new Cave creation on track&lt;br /&gt;
#more dissemination: talks/slides?, datasets (animals?)?, more/better images (&amp;gt;2004..., analytics)?&lt;br /&gt;
#funded grants happy: nih (ext?), immgen, nmrkrs, sa, aptima, jian&lt;br /&gt;
#si^2 and expedition proposal followthrough?&lt;br /&gt;
#teach a good CS16&lt;br /&gt;
#keep Vis &#039;11 on track&lt;br /&gt;
#cs facilities vision document&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Eni ===&lt;br /&gt;
&lt;br /&gt;
* Make progress in the surface analysis/congruency problem &lt;br /&gt;
* Draft a proposal for my work &lt;br /&gt;
* Read more literature that’s related to my work&lt;br /&gt;
* Submit a paper to MRM on tracts &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Jadrian Miles|Jadrian]] ===&lt;br /&gt;
&lt;br /&gt;
* [[:Image:Jadrian Miles PhD Proposal original 2009-06-22.pdf‎|Initial thesis proposal, submitted 2009-06-22]]&lt;br /&gt;
* [[:Image:Jadrian Miles PhD Proposal rev2 2009-10-15.pdf|Proposal second revision, prepared 2009-10-15]]&lt;br /&gt;
&lt;br /&gt;
# Write up dissertation table of contents, outline, and some hole-filled chapters by July 31st&lt;br /&gt;
#* Done.&lt;br /&gt;
# Write up departmental proposal document by September 1st&lt;br /&gt;
#* Initial submission on September 10th.&lt;br /&gt;
# Present the proposal formally to the department in October&lt;br /&gt;
#* Proposal presentation on October 7th.&lt;br /&gt;
# Define a macrostructure model and cost function&lt;br /&gt;
#* The committee has been very involved in refining the idea of the &amp;quot;cost functions&amp;quot; for various fitting steps.  The macrostructure model has gone through several revisions and is almost done.&lt;br /&gt;
# Implement curve clustering using the macrostructure model; write up for ISMRM&lt;br /&gt;
#* {{red|Not done.}}  Intended ISMRM submission on Rician noise correction turned out not to be novel, though I did learn about a ton of related work that has been quite successful.&lt;br /&gt;
# Extend the macrostructure cost function to include DWI values&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Implement macrostructure adjustment from DWIs; write up for an MRM paper&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Get preliminary histological data of some kind, either by doing it in-house, working with collaborators, or outsourcing&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Derive a minimal set of microstructure measurements from histological data&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Copy South Africa data from DVDs and work with Amanda to import and process it&lt;br /&gt;
#* All data are organized on the data server but not processed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Nathan Malkin|Nathan]] ===&lt;br /&gt;
* Independent research&lt;br /&gt;
*# Develop platform for cooperative experiments using Mechanical Turk&lt;br /&gt;
*#* (Mostly) done&lt;br /&gt;
*# Obtain IRB approval&lt;br /&gt;
*## &amp;lt;s&amp;gt;Draft protocol&amp;lt;/s&amp;gt;&lt;br /&gt;
*## &#039;&#039;&#039;Edit/revise protocol with David&#039;&#039;&#039;&lt;br /&gt;
*## Submit protocol&lt;br /&gt;
*## Get approval!&lt;br /&gt;
*# Using games from experimental economics (Prisoner&#039;s Dilemma, Trust Game, Ultimatum Game), attempt to replicate some of their findings&lt;br /&gt;
*# Test the effect of the incentive level on people&#039;s behavior&lt;br /&gt;
*# Test the effects various manifestations of online identity have on behavior&lt;br /&gt;
*#* Attempt to replicate findings with faces increasing other-regarding behavior&lt;br /&gt;
*#* Repeat experiment with avatars&lt;br /&gt;
*#* Also names and nicknames&lt;br /&gt;
*# Test effects of different interface elements&lt;br /&gt;
*#* Colors, borders, overall &amp;quot;prettiness&amp;quot;, ...&lt;br /&gt;
*# Test effects of priming&lt;br /&gt;
* Work with David, Win, and Ryan to provide metrics for brain area research&lt;br /&gt;
* (Ongoing) Provide support and documentation for previously-written tools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Radu === &lt;br /&gt;
&lt;br /&gt;
# Vis Nudge Poster submission (end June)&lt;br /&gt;
# Submission of maps paper to CG&amp;amp;A (early July)&lt;br /&gt;
# Submission of nudges paper to CHI(?) (early September)&lt;br /&gt;
# Immgen: deploy 2D embedding map and module map (end August)&lt;br /&gt;
# Immgen: create the deployment infrastructure for the existing maps (end August)&lt;br /&gt;
# First draft of dissertation (end August)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Wenjin ===&lt;br /&gt;
&lt;br /&gt;
* Neuroimage paper - submit asap even just with simulation results. &lt;br /&gt;
* Axon application on real data: human/pig/macaque&lt;br /&gt;
* IACUC - work with David on submission&lt;br /&gt;
* IRB - work with David and Ed on submission&lt;br /&gt;
* CV ready for job search in fall&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Steven Gomez|Steve]] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Fall&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Work-in-progress&lt;br /&gt;
** Prepare Diagrams work as 4-pg short paper for &#039;&#039;&#039;[HARD] CHI -- 9/23&#039;&#039;&#039; (draft by 9/1)&lt;br /&gt;
*** This may not be submitted if enough pieces aren&#039;t finished or evaluated; plan for TVCG submission (or hold-off for Vis &#039;12) by &#039;&#039;&#039;12/31&#039;&#039;&#039;&lt;br /&gt;
*** Continue work on Diagrams project; reconnect with Schnitzer group and interested collaborators at Brown&lt;br /&gt;
** Prepare and submit Tome paper for &#039;&#039;&#039;[HARD] CHI -- 9/23&#039;&#039;&#039; (draft by 9/8)&lt;br /&gt;
** Make vis poster and &amp;quot;fast forward&amp;quot; slide -- &#039;&#039;&#039;[HARD] 9/30&#039;&#039;&#039;&lt;br /&gt;
* Build or get Testbed environment for parameterized visualization -- &#039;&#039;&#039;9/30&#039;&#039;&#039;&lt;br /&gt;
** Mentioned in Jian&#039;s grant, but I also want to use it for crowdsourcing experiments (e.g., Can we build a human-centered visualization &amp;quot;editing&amp;quot; framework that is the graphics equivalent of the [http://dl.acm.org/citation.cfm?id=1866078 Soylent paper from UIST &#039;10]?)&lt;br /&gt;
** Sketch of that MTurk project in [http://vrl.cs.brown.edu/wiki/Nascent_Papers#Steve Nascent Papers] by 9/1&lt;br /&gt;
* NIH-style proposal first draft to DHL by &#039;&#039;&#039;December&#039;&#039;&#039;&lt;br /&gt;
** &amp;quot;Vision&amp;quot; by 9/15&lt;br /&gt;
** &amp;quot;Significance&amp;quot; and &amp;quot;Contributions&amp;quot; by 9/30&lt;br /&gt;
** &amp;quot;Research Plan&amp;quot; by 10/31&lt;br /&gt;
* Coursework&lt;br /&gt;
** DHL&#039;s class, maybe &amp;quot;Visually Guided Behavior&amp;quot; (Joo-Hyun Song) or &amp;quot;Computational Vision&amp;quot; (Thomas Serre) in CLPS&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Summer&#039;&#039;&#039;&lt;br /&gt;
* Project planning&lt;br /&gt;
** &amp;lt;s&amp;gt;Model visual scan behavior / learn &#039;interesting&#039; windows of a vis canvas, sketch by 5/27&amp;lt;/s&amp;gt; see &#039;Nascent Papers&#039;&lt;br /&gt;
** &amp;lt;s&amp;gt;Trust idea, kill or revise by 5/24&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &#039;&#039;Extension of Technology acceptance model (TAM) for visual analytics -- What are the factors: trust, usability, usefulness?&#039;&#039;&lt;br /&gt;
* &amp;lt;s&amp;gt;Review rounds for InfoVis paper submission - if rejected, see about poster? - 6/27&amp;lt;/s&amp;gt;&lt;br /&gt;
* &amp;lt;s&amp;gt;Vis Poster submission - 6/27&amp;lt;/s&amp;gt;&lt;br /&gt;
* &amp;quot;Plan for a plan&amp;quot; (6/1) in order to submit DHL proposal draft by Dec.&lt;br /&gt;
&lt;br /&gt;
Jian-DHL &amp;quot;Scientific Vis Language&amp;quot; Grant&lt;br /&gt;
* &amp;lt;s&amp;gt;Get status update of preliminary/current progress&amp;lt;/s&amp;gt;&lt;br /&gt;
** &#039;&#039;What are the highest priority activities to get this moving?  e.g. task space analysis?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;s&amp;gt;List of papers to write - draft by 5/24&amp;lt;/s&amp;gt;&lt;br /&gt;
* &amp;lt;s&amp;gt;Gather domain data (DTI, bioflow?)&amp;lt;/s&amp;gt; Is bioflow a part of Jian&#039;s project any more?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Ryan ===&lt;br /&gt;
&lt;br /&gt;
* Comps project implementation&lt;br /&gt;
** File formats&lt;br /&gt;
** Structural/diffusion image registration&lt;br /&gt;
** Track-surface intersection&lt;br /&gt;
** TOI shape metrics&lt;br /&gt;
** Surface shape metrics&lt;br /&gt;
* Maintain contact with comps collaborators&lt;br /&gt;
* Work on group diffusion pipelines in progress&lt;br /&gt;
* Explore possibilities for wrist shape analysis&lt;br /&gt;
* Submit MRI IRB Renewal&lt;br /&gt;
&lt;br /&gt;
== Past Plans and Goals ==&lt;br /&gt;
* [[/Spring 2011|Spring &#039;11]]&lt;br /&gt;
* [[/Summer-Fall 2010|Summer-Fall &#039;10]]&lt;br /&gt;
* [[/Spring 2010|Spring &#039;10]]&lt;br /&gt;
* [[/Fall 2009|Fall &#039;09]]&lt;br /&gt;
* [[/Summer 2009|Summer &#039;09]]&lt;br /&gt;
* [[/Spring 2009|Spring &#039;09]]&lt;br /&gt;
* [[/Fall 2008|Fall &#039;08]]&lt;br /&gt;
* [http://sites.google.com/a/vis.cs.brown.edu/collaboravis/Home/summer-08-group-goals Summer &#039;08]&lt;br /&gt;
&lt;br /&gt;
[[Category:VRL]]&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/How_Tos&amp;diff=4968</id>
		<title>CS295J/How Tos</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/How_Tos&amp;diff=4968"/>
		<updated>2011-09-09T00:36:38Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Add to a reading list */ fixed link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Add to a reading list==&lt;br /&gt;
This is done at several levels.  These levels can be completed at different times.  &lt;br /&gt;
&lt;br /&gt;
:# edit the wiki page [[CS295J/Literature]] (or a related (sub)page) and include a reasonably complete citation to the work.  Also create a key for the item.  The key should be constructed from the first author&#039;s last name, the year of publication, and a three-letter acronym for the first three real words in the title; these three elements should be connected with dashes, e.g., Laidlaw-1998-XYZ.  &lt;br /&gt;
:# get a pdf or a reliable link to a pdf and point to it.  &lt;br /&gt;
:# add a summary evaluation about the paper under the citation.  This summary should only indicate how relevant the paper is to our research project goals.  Put your name at the end of the evaluation.&lt;br /&gt;
:# if the relevance is high, create a new page for describing the relationship of the item to our research.  This page should be named using the key created above.&lt;br /&gt;
&lt;br /&gt;
Here is an example:&lt;br /&gt;
&lt;br /&gt;
:Laidlaw-2007-QTI David H. Laidlaw, Stephanie Lee, Stephen Correia, David Tate, Robert Paul, Song Zhang, Steven Salloway, and Paul Malloy. Quantitative Tract of Interest Metrics for White Matter Integrity based on Diffusion Tensor MRI Data. Views Radiology, 8(4):2-4, 2007.&lt;br /&gt;
&lt;br /&gt;
:: No relevance to our project, unless we can use tract-based measures of users to calibrate user interfaces (seems unlikely, although an intriguing future direction). (David)&lt;br /&gt;
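The key construction described above can be sketched as a short script (an illustrative stand-in; &amp;lt;tt&amp;gt;make_key&amp;lt;/tt&amp;gt; and its stopword list are hypothetical, not part of any wiki tooling):&lt;br /&gt;

```python
import re

def make_key(last_name, year, title):
    """Build a reading-list key like Laidlaw-1998-XYZ from a citation.

    Uses the first author's last name, the publication year, and a
    three-letter acronym from the first three real words of the title.
    """
    # Treat short function words as not "real" words of the title
    # (an assumption; the convention above doesn't define "real").
    stopwords = {"a", "an", "the", "of", "for", "on", "in", "and", "to"}
    words = [w for w in re.findall(r"[A-Za-z]+", title)
             if w.lower() not in stopwords]
    acronym = "".join(w[0].upper() for w in words[:3])
    return "-".join([last_name, str(year), acronym])

print(make_key("Laidlaw", 2007,
               "Quantitative Tract of Interest Metrics for White Matter Integrity"))
# Laidlaw-2007-QTI
```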
&lt;br /&gt;
[http://upload.wikimedia.org/wikipedia/meta/6/66/MediaWikiRefCard.pdf &#039;&#039;&#039;Get a wiki reference card&#039;&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
==Read protected papers from a remote location==&lt;br /&gt;
&lt;br /&gt;
Many papers can be accessed while using one of Brown&#039;s networks, but cannot be accessed from a remote location. There are two ways around this: &lt;br /&gt;
# Use a standard VPN client:&lt;br /&gt;
## [https://wiki.brown.edu/confluence/display/CISDOC/VPN+-+Using+Brown%27s+Virtual+Private+Network+with+Windows VPN with Windows]&lt;br /&gt;
## [https://wiki.brown.edu/confluence/display/CISDOC/VPN+-+Using+Brown%27s+Virtual+Private+Network+with+Mac+OS+X VPN with Mac OS X]&lt;br /&gt;
# Use [https://wiki.brown.edu/confluence/display/CISDOC/VPN+-+WebVPN+and+Library+Eresources WebVPN] to access campus-based web services such as the Brown library resources.&lt;br /&gt;
&lt;br /&gt;
For general information, see the Brown CIS VPN [https://wiki.brown.edu/confluence/display/CISDOC/VPN+%28Virtual+Private+Network%29+-+FAQ FAQ].&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4967</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4967"/>
		<updated>2011-09-09T00:33:32Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Installation */ updating location of curvecollection library&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read streamline data from and write it to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/mri/curvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines near the top of your file: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;mri/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;mri/curve.h&amp;gt; &lt;br /&gt;
#include &amp;lt;mri/curveVertex.h&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function takes a vertex property name, looks it up, and returns a reference to the corresponding property value in the current CurveVertex. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive; internally, they are normalized to and stored in their lower-case form.&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection, before you can call this function. If you call it before this has happened, the call will throw a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function takes an integer &#039;&#039;Property ID&#039;&#039; and returns a reference to the vertex property it identifies. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Those wishing to call &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; from within a loop may want to avoid the efficiency loss of looking up the name at each call. Instead, they can perform the lookup once using &amp;lt;tt&amp;gt;[[#Vertex_properties_2|curveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt; and pass the resulting int ID to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside their loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
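The lookup-once pattern described in the note above can be sketched like this (an illustrative Python stand-in; the lists here mimic, but are not, the library&#039;s internal storage):&lt;br /&gt;

```python
# Sketch of "resolve the property name once, then index inside the loop".
# These lists are hypothetical stand-ins for libcurvecollection's storage.
property_names = ["fa", "length"]                # stored at collection level
vertex_properties = [[0.5, 12.5], [0.25, 12.5]]  # per-vertex property values

# Slow: the name is re-resolved on every iteration.
total_slow = sum(props[property_names.index("fa")] for props in vertex_properties)

# Fast: resolve the name once (like CurveCollection::getVertexPropertyID),
# then index directly (like property(int)) inside the loop.
fa_id = property_names.index("fa")
total_fast = sum(props[fa_id] for props in vertex_properties)
print(total_fast)  # 0.75
```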
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
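The semantics of &amp;lt;tt&amp;gt;equals&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;distance&amp;lt;/tt&amp;gt; can be sketched in Python (an illustrative stand-in for the documented behavior, not the library&#039;s code):&lt;br /&gt;

```python
import math

def vertices_equal(a, b, epsilon=1e-6):
    # Coordinate-wise comparison within epsilon; properties are ignored,
    # matching the documented behavior of CurveVertex::equals.
    return all(math.isclose(pa, pb, abs_tol=epsilon) for pa, pb in zip(a, b))

def vertex_distance(a, b):
    # Straight-line (Euclidean) distance between the two 3D positions.
    return math.dist(a, b)

p = (1.0, 2.0, 3.0)
q = (1.0, 2.0, 3.0000005)
print(vertices_equal(p, q))                  # True: each coordinate is within 1e-6
print(vertex_distance(p, (4.0, 6.0, 3.0)))   # 5.0
```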
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The destructor will delete all vertices that constituted this curve and all properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex*&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with pointers to all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string is not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors and mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_and_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these behave identically.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::string const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The actual length of the curve (sum of vertex-to-vertex distances).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current curve and new curves must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Standard iterator pair: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; points to the first CurveVertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; points one past the last.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: dereferencing the iterator yields a &#039;&#039;pointer&#039;&#039; to a CurveVertex, not the CurveVertex itself. (This design choice is motivated by the internal representation of a Curve.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const * currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning&#039;&#039;&#039;: a Curve has a &amp;lt;tt&amp;gt;const_iterator&amp;lt;/tt&amp;gt; but no &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt;. If you use an &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt; in your code, you will have problems!&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The constructors take vectors of curves (or pointers to them) as parameters and construct a new CurveCollection using copies of those curves. &lt;br /&gt;
&lt;br /&gt;
If the curves contain any curve and/or vertex properties, they will be preserved.  However, for this to happen, the curve and vertex property names must match (including their order) across all curves. If this is not the case, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown, and the constructor exits.&lt;br /&gt;
&lt;br /&gt;
The constructor will also do its best to preserve the dimensions and voxel size of the source image.  In cases of conflicting dimensions and voxel sizes, the constructor will use the smallest voxel size among the candidates and the dimensions that went with that value.  Note that these values are only meaningful when writing to (or reading from) TrackVis (&#039;&#039;.trk&#039;&#039;) files (see note at &amp;lt;tt&amp;gt;CurveCollection::writeTrackVisFile&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(CurveCollection const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  Will produce a curve collection identical to the given one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Constructs an empty curve collection with no curves and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clears and deletes everything in the CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addCurve(Curve const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given curve to this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
For this to happen, the properties of the given curve must match those of this CurveCollection. Otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
However, there are two &#039;&#039;&#039;exceptions&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 1&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If the CurveCollection is empty, it will take on the properties (if any) of the given curve. That is, if a curve &#039;&#039;C&#039;&#039; with properties (or vertex properties) is added to a blank CurveCollection, no error is thrown; instead, the CurveCollection &amp;quot;adopts&amp;quot; &#039;&#039;C&#039;&#039;&#039;s properties, and the properties of any subsequent curves will be required to match those of &#039;&#039;C&#039;&#039;.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 2&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If a CurveCollection has properties, but the given curve has none, the given curve will be assigned zeros (0.0) in place of all the properties it does not have.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;Curve*&amp;gt; const &amp;amp; getCurves() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a reference to the vector of pointers to the curves that comprise this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Overloaded random access operators for CurveCollection return a reference to the curve at the specified index.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage examples: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve&amp;amp; firstCurve = aCurveCollection[0];&amp;lt;br /&amp;gt;Curve&amp;amp; lastCurve = aCurveCollection[aCurveCollection.size() - 1];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void defineCurveProperty(std::string const &amp;amp;, std::vector&amp;lt;double&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assigns the given double values to the curves as properties. &lt;br /&gt;
&lt;br /&gt;
The properties are assigned in order. That is, the &#039;&#039;n&#039;&#039;th curve in the CurveCollection will get the &#039;&#039;n&#039;&#039;th value from the given vector. If the number of curves and the number of values given do not match, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
The property values can later be retrieved using the string name they were saved with, or using their property ID (found using [[#Accessing_properties_and_information_about_them|CurveCollection::getCurvePropertyID]]). Note that property name strings are converted to lowercase before being saved.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void defineVertexProperty(std::string const &amp;amp;, std::vector&amp;lt;std::vector&amp;lt;double&amp;gt; &amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assigns the given double values to the vertices as properties. &lt;br /&gt;
&lt;br /&gt;
The properties are assigned in order. That is, the &#039;&#039;k&#039;&#039;th vertex in the &#039;&#039;n&#039;&#039;th curve in the CurveCollection will get the &#039;&#039;k&#039;&#039;th value from the &#039;&#039;n&#039;&#039;th vector.  If the dimensions of the given vectors do not match the number of curves and the number of vertices in each curve, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
The property values can later be retrieved using the string name they were saved with, or using their property ID (found using [[#Accessing_properties_and_information_about_them|CurveCollection::getVertexPropertyID]]). Note that property name strings are converted to lowercase before being saved.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of curves in this collection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Plans_and_Goals&amp;diff=4837</id>
		<title>Plans and Goals</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Plans_and_Goals&amp;diff=4837"/>
		<updated>2011-06-14T18:24:31Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Nathan */ summer goals&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page the members of the [[VRL]] record and refine their goals for the current semester.  This is a living document in which [[dhl]] will provide feedback.  See the bottom of the page for links to past plans &amp;amp; goals documents.&lt;br /&gt;
&lt;br /&gt;
== Current Schedule ==&lt;br /&gt;
Meetings are on Tuesdays.  The authoritative list is in dhl&#039;s calendar. (updated here 9/24/10)&lt;br /&gt;
&lt;br /&gt;
* 2:30 - Dawn&lt;br /&gt;
* 2:50 - Brad&lt;br /&gt;
* 3:00 - Radu&lt;br /&gt;
* 3:10 - Steve G&lt;br /&gt;
* 3:20 - Nathan&lt;br /&gt;
* 3:30 - Jadrian&lt;br /&gt;
* 3:40 - Caroline&lt;br /&gt;
* 3:50 - Ryan&lt;br /&gt;
* 4:00 - Wenjin&lt;br /&gt;
&lt;br /&gt;
== Current Plans and Goals (Summer-Fall &#039;11) ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [[User:Brad Berg|Brad]] ===&lt;br /&gt;
&lt;br /&gt;
Projects&lt;br /&gt;
&lt;br /&gt;
* Support the ongoing VRL Group projects:&lt;br /&gt;
** Jadrian&#039;s brain research.&lt;br /&gt;
** Wenjin&#039;s Axon estimation project.&lt;br /&gt;
** Radu&#039;s Protein project:&amp;lt;br&amp;gt;&lt;br /&gt;
::Assist Radu to integrate a test and any recent code.&lt;br /&gt;
&lt;br /&gt;
* Support collaborative projects.&lt;br /&gt;
** Revised wrist registration.&lt;br /&gt;
&lt;br /&gt;
* Contribute to the design and development of the new cave.  Integrate changes to vrg3d. Develop an API for new devices.  Test and support vrg3d in the new cave.&lt;br /&gt;
&lt;br /&gt;
* Add to the public distribution of diffusion imaging data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Infrastructure&lt;br /&gt;
&lt;br /&gt;
* Continue to write Wiki entries as needed.&lt;br /&gt;
&lt;br /&gt;
* Continue setting up third party software for students to use. &lt;br /&gt;
&lt;br /&gt;
* Extend the build system to make kits for public source and binary distributions.&lt;br /&gt;
&lt;br /&gt;
* Continue extending the make files to support new features such as additional third party packages.&lt;br /&gt;
&lt;br /&gt;
* Continue to integrate any new tests written by students into nightly test runs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Cagatay Demiralp|Çağatay]] === &lt;br /&gt;
&lt;br /&gt;
# Give a good proposal talk. &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Caroline Ziemkiewicz|Caroline]] ===&lt;br /&gt;
Summer Goals&lt;br /&gt;
* Do a closer analysis of selected segments from the CBDM data. &lt;br /&gt;
* Write up any results as an abstract; consider visweek poster.&lt;br /&gt;
* Set up a more reliable system for task data coding.&lt;br /&gt;
* Plan next stage of task modeling research.&lt;br /&gt;
* Stay involved in Steve and Nathan&#039;s trust projects.&lt;br /&gt;
* Keep in touch with Ynes from the Salomon lab for more data.&lt;br /&gt;
* Work on statements for job search.&lt;br /&gt;
&lt;br /&gt;
Broader Goals&lt;br /&gt;
* Submit results to two or more major conferences (CHI, EuroVis, VisWeek)&lt;br /&gt;
* Submit more grant proposals; try to have some funding on hand during job search&lt;br /&gt;
* Successfully merge earlier theory work with modeling for more solid research foundation going forward&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== David ===&lt;br /&gt;
#get diffusion normals, one library, and one executable distributed (late...)&lt;br /&gt;
#get one diffusion analysis completed and out (Paul? Cohen? hire?)&lt;br /&gt;
#visweek papers&lt;br /&gt;
#*Steve: comps (need title, brain diagrams and events expanded?)&lt;br /&gt;
#*Cagatay: coloring manifesto&lt;br /&gt;
#*Cagatay: metric learning via interaction (?)&lt;br /&gt;
#*Cagatay: cycles of brains (?)&lt;br /&gt;
#*Radu: Analysis nudges: guiding scientists towards improved analytic practices&lt;br /&gt;
#*Caroline: A Task Analysis of Reasoning with Multi-View Visualization (is vis appropriate?)&lt;br /&gt;
#*Ryan: get involved with Cagatay&#039;s?  Cohen analysis (maybe not vis)?&lt;br /&gt;
#*Wenjin: too far from Vis?&lt;br /&gt;
#*Jadrian: modeling curve bundles? small problem results?&lt;br /&gt;
#support 7 phd students&#039;  progress toward graduation and careers&lt;br /&gt;
#new Cave creation on track&lt;br /&gt;
#more dissemination: talks/slides?, datasets (animals?)?, more/better images (&amp;gt;2004..., analytics)?&lt;br /&gt;
#funded grants happy: nih (ext?), immgen, nmrkrs, sa, aptima, jian&lt;br /&gt;
#si^2 and expedition proposal followthrough?&lt;br /&gt;
#teach a good CS16&lt;br /&gt;
#keep Vis &#039;11 on track&lt;br /&gt;
#cs facilities vision document&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Eni ===&lt;br /&gt;
&lt;br /&gt;
* Make progress in the surface analysis/congruency problem &lt;br /&gt;
* Draft a proposal for my work &lt;br /&gt;
* Read more literature that’s related to my work&lt;br /&gt;
* Submit a paper to MRM on tracts &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Jadrian Miles|Jadrian]] ===&lt;br /&gt;
&lt;br /&gt;
* [[:Image:Jadrian Miles PhD Proposal original 2009-06-22.pdf‎|Initial thesis proposal, submitted 2009-06-22]]&lt;br /&gt;
* [[:Image:Jadrian Miles PhD Proposal rev2 2009-10-15.pdf|Proposal second revision, prepared 2009-10-15]]&lt;br /&gt;
&lt;br /&gt;
# Write up dissertation table of contents, outline, and some hole-filled chapters by July 31st&lt;br /&gt;
#* Done.&lt;br /&gt;
# Write up departmental proposal document by September 1st&lt;br /&gt;
#* Initial submission on September 10th.&lt;br /&gt;
# Present the proposal formally to the department in October&lt;br /&gt;
#* Proposal presentation on October 7th.&lt;br /&gt;
# Define a macrostructure model and cost function&lt;br /&gt;
#* The committee has been very involved in refining the idea of the &amp;quot;cost functions&amp;quot; for various fitting steps.  The macrostructure model has gone through several revisions and is almost done.&lt;br /&gt;
# Implement curve clustering using the macrostructure model; write up for ISMRM&lt;br /&gt;
#* {{red|Not done.}}  Intended ISMRM submission on Rician noise correction turned out not to be novel, though I did learn about a ton of related work that has been quite successful.&lt;br /&gt;
# Extend the macrostructure cost function to include DWI values&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Implement macrostructure adjustment from DWIs; write up for an MRM paper&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Get preliminary histological data of some kind, either by doing it in-house, working with collaborators, or outsourcing&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Derive a minimal set of microstructure measurements from histological data&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Copy South Africa data from DVDs and work with Amanda to import and process it&lt;br /&gt;
#* All data are organized on the data server but not processed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Nathan Malkin|Nathan]] ===&lt;br /&gt;
* Independent research&lt;br /&gt;
*# Develop platform for cooperative experiments using Mechanical Turk&lt;br /&gt;
*# Using games from experimental economics (Prisoner&#039;s Dilemma, Trust Game, Ultimatum Game), attempt to replicate some of their findings&lt;br /&gt;
*# Test the effect of the incentive level on people&#039;s behavior&lt;br /&gt;
*# Test the effects various manifestations of online identity have on behavior&lt;br /&gt;
*#* Attempt to replicate findings with faces increasing other-regarding behavior&lt;br /&gt;
*#* Repeat experiment with avatars&lt;br /&gt;
*#* Also names and nicknames&lt;br /&gt;
*# Test effects of different interface elements&lt;br /&gt;
*#* Colors, borders, overall &amp;quot;prettiness&amp;quot;, ...&lt;br /&gt;
*# Test effects of priming&lt;br /&gt;
* Work with David, Win, and Ryan to provide metrics for brain area research&lt;br /&gt;
* (Ongoing) Provide support and documentation for previously-written tools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Radu === &lt;br /&gt;
&lt;br /&gt;
# Learning distance metrics from users (with Cagatay)&lt;br /&gt;
# Revise/submit neural maps paper&lt;br /&gt;
# Revise/resubmit google maps paper&lt;br /&gt;
# Run analysis nudges study and submit Vis paper&lt;br /&gt;
# Deploy at least one more Immgen map on the Immgen website&lt;br /&gt;
# Create the deployment infrastructure for the existing maps&lt;br /&gt;
# Make my PhD code reusable (installable, automatic testing, partially commented) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Wenjin ===&lt;br /&gt;
&lt;br /&gt;
* MICCAI paper submission? due 3/9/2011&lt;br /&gt;
* NeuroImage journal submission&lt;br /&gt;
* Get real brain data on double-PGSE acquisition (work with Ed)&lt;br /&gt;
* Good CONNECT meeting (MRI of brain microstructure and connectivity) talk in Feb&lt;br /&gt;
* Ponder on disease applications&lt;br /&gt;
* Long-term research goals and plans: research plan, statement of interest, teaching statement, etc.&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Steven Gomez|Steve]] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Summer&#039;&#039;&#039;&lt;br /&gt;
* Project planning&lt;br /&gt;
** Model visual scan behavior / learn &#039;interesting&#039; windows of a vis canvas, sketch by 5/27&lt;br /&gt;
** Trust idea, kill or revise by 5/24&lt;br /&gt;
*** &#039;&#039;Extension of Technology acceptance model (TAM) for visual analytics -- What are the factors: trust, usability, usefulness?&#039;&#039;&lt;br /&gt;
* Review rounds for InfoVis paper submission - if rejected, see about poster? - 6/27&lt;br /&gt;
* Vis Poster submission - 6/27&lt;br /&gt;
* &amp;quot;Plan for a plan&amp;quot; (6/1) in order to submit DHL proposal draft by Dec.&lt;br /&gt;
&lt;br /&gt;
Jian-DHL &amp;quot;Scientific Vis Language&amp;quot; Grant&lt;br /&gt;
* Get status update of preliminary/current progress&lt;br /&gt;
** &#039;&#039;What are the highest priority activities to get this moving?  e.g. task space analysis?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* List of papers to write - draft by 5/24&lt;br /&gt;
* Gather domain data (DTI, bioflow?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Ryan ===&lt;br /&gt;
&lt;br /&gt;
* Comps project implementation&lt;br /&gt;
** File formats&lt;br /&gt;
** Structural/diffusion image registration&lt;br /&gt;
** Track-surface intersection&lt;br /&gt;
** TOI shape metrics&lt;br /&gt;
** Surface shape metrics&lt;br /&gt;
* Maintain contact with comps collaborators&lt;br /&gt;
* Work on group diffusion pipelines in progress&lt;br /&gt;
* Explore possibilities for wrist shape analysis&lt;br /&gt;
* Submit MRI IRB Renewal&lt;br /&gt;
&lt;br /&gt;
== Past Plans and Goals ==&lt;br /&gt;
* [[/Spring 2011|Spring &#039;11]]&lt;br /&gt;
* [[/Summer-Fall 2010|Summer-Fall &#039;10]]&lt;br /&gt;
* [[/Spring 2010|Spring &#039;10]]&lt;br /&gt;
* [[/Fall 2009|Fall &#039;09]]&lt;br /&gt;
* [[/Summer 2009|Summer &#039;09]]&lt;br /&gt;
* [[/Spring 2009|Spring &#039;09]]&lt;br /&gt;
* [[/Fall 2008|Fall &#039;08]]&lt;br /&gt;
* [http://sites.google.com/a/vis.cs.brown.edu/collaboravis/Home/summer-08-group-goals Summer &#039;08]&lt;br /&gt;
&lt;br /&gt;
[[Category:VRL]]&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Plans_and_Goals&amp;diff=4700</id>
		<title>Plans and Goals</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Plans_and_Goals&amp;diff=4700"/>
		<updated>2010-12-25T20:24:20Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Nathan */ goals for spring 2011&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page the members of the [[VRL]] record and refine their goals for the current semester.  This is a living document in which [[dhl]] will provide feedback.  See the bottom of the page for links to past plans &amp;amp; goals documents.&lt;br /&gt;
&lt;br /&gt;
== Current Schedule ==&lt;br /&gt;
Meetings are on Tuesdays.  The authoritative list is in dhl&#039;s calendar. (updated here 9/24/10)&lt;br /&gt;
&lt;br /&gt;
* 2:30 - Dawn&lt;br /&gt;
* 2:50 - Brad&lt;br /&gt;
* 3:00 - Radu&lt;br /&gt;
* 3:10 - Steve G&lt;br /&gt;
* 3:20 - Nathan&lt;br /&gt;
* 3:30 - Jadrian&lt;br /&gt;
* 3:40 - Caroline&lt;br /&gt;
* 3:50 - Ryan&lt;br /&gt;
* 4:00 - Wenjin&lt;br /&gt;
&lt;br /&gt;
== Current Plans and Goals (Spring &#039;11) ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [[User:Brad Berg|Brad]] ===&lt;br /&gt;
&lt;br /&gt;
Projects&lt;br /&gt;
&lt;br /&gt;
* Support the ongoing VRL Group projects:&lt;br /&gt;
** Jadrian&#039;s brain research.&lt;br /&gt;
** Radu&#039;s Protein project:&amp;lt;br&amp;gt;&lt;br /&gt;
::Assist Radu to integrate a test and any recent code.&lt;br /&gt;
&lt;br /&gt;
* Support collaborative projects.&lt;br /&gt;
** Wrist cartilage support for Eni and Michael.&lt;br /&gt;
** Migrate Win&#039;s mri pipeline to run in CIT.&lt;br /&gt;
** Andy Loomis and Andy Forsberg&#039;s Adviser program.&lt;br /&gt;
&lt;br /&gt;
* Contribute to the design and development of the new cave.  Integrate changes to vrg3d.  Simplify the configuration and provide a quick start path for student projects.&lt;br /&gt;
&lt;br /&gt;
* Set up public distribution of diffusion imaging data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Infrastructure&lt;br /&gt;
&lt;br /&gt;
* Manage the migration to the new file system.&lt;br /&gt;
&lt;br /&gt;
* Continue to write Wiki entries as needed.&lt;br /&gt;
&lt;br /&gt;
* Continue setting up third party software for students to use. Current imported packages are:  blitz, cg, g3d, qt, camino, dtk, afni, nifti, vivro, vrpn, glut, and sdl.&lt;br /&gt;
&lt;br /&gt;
* Extend the build system to make kits for public source and binary distributions.&lt;br /&gt;
&lt;br /&gt;
* Improve support for remote development.&lt;br /&gt;
&lt;br /&gt;
* Launch the Windows test system.&lt;br /&gt;
&lt;br /&gt;
* Continue extending the make files to support new features, such as additional third-party packages.&lt;br /&gt;
&lt;br /&gt;
* Upgrade migrated Linux code to work with gcc 4.3 in conjunction with the summer Debian upgrade.&lt;br /&gt;
&lt;br /&gt;
* Continue to integrate any new tests written by students into nightly test runs.&lt;br /&gt;
&lt;br /&gt;
* Finish integrating the new version of the Nag libraries.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowledge&lt;br /&gt;
&lt;br /&gt;
* Learn more about the cave; particularly vrg3d software development.&lt;br /&gt;
&lt;br /&gt;
* Meet (SciVis talk) with students to address any integration issues.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Cagatay Demiralp|Çağatay]] === &lt;br /&gt;
&lt;br /&gt;
# Resolve crossings w/ Geman and Erik (submit the results to cvpr by 11/11)&lt;br /&gt;
# Count cycles w/ Mumford&lt;br /&gt;
# Submit the ToG paper and explore the follow up ideas w/ Gabriel &lt;br /&gt;
# Submit the optimization on mesh graphs (to where? )&lt;br /&gt;
# Finalize the revision of the 2d map paper&lt;br /&gt;
# Solidify the ideas on metric learning through interaction  &lt;br /&gt;
# Minimize the ta&#039;ship workload of c. topology class by preparing course material aeap.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Caroline Ziemkiewicz|Caroline]] ===&lt;br /&gt;
&lt;br /&gt;
Explore cognitive modeling for assessing multiview visualization interfaces&lt;br /&gt;
* Literature review: guidelines for multiview design, evaluations of multiview visualization, cognitive science perspective on problem solving with multiple perspectives etc.&lt;br /&gt;
* Identify an application area (brain circuits, proteins, etc.)&lt;br /&gt;
* Identify tasks for user study / modeling&lt;br /&gt;
* Identify ways to define the similarity of models between two different visual representations&lt;br /&gt;
* Build a model of multiview context-switching given a certain type of task, to predict response times (and accuracy?)&lt;br /&gt;
* Test the model&#039;s predictions against a user study &lt;br /&gt;
* Aim for VisWeek publication?&lt;br /&gt;
&lt;br /&gt;
Identify potential modeling projects based on visual structure for future work...&lt;br /&gt;
* Intrinsic quality measures of visual representations&lt;br /&gt;
* How decision-making is affected by visualization design&lt;br /&gt;
* Simple ways to define and measure the mental model of a visualization&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== David ===&lt;br /&gt;
#get diffusion normals, one library, and one executable distributed&lt;br /&gt;
#good TBI talk in Chicago&lt;br /&gt;
#support 5 phd students&#039; + trevor&#039;s progress&lt;br /&gt;
#si^2 proposal for 6/14, then expedition?&lt;br /&gt;
#keep Vis &#039;11 on track&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Eni ===&lt;br /&gt;
&lt;br /&gt;
* Make progress in the surface analysis/congruency problem &lt;br /&gt;
* Draft a proposal for my work &lt;br /&gt;
* Read more literature that’s related to my work&lt;br /&gt;
* Submit a paper to MRM on tracts &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Jadrian Miles|Jadrian]] ===&lt;br /&gt;
&lt;br /&gt;
* [[:Image:Jadrian Miles PhD Proposal original 2009-06-22.pdf‎|Initial thesis proposal, submitted 2009-06-22]]&lt;br /&gt;
* [[:Image:Jadrian Miles PhD Proposal rev2 2009-10-15.pdf|Proposal second revision, prepared 2009-10-15]]&lt;br /&gt;
&lt;br /&gt;
# Write up dissertation table of contents, outline, and some hole-filled chapters by July 31st&lt;br /&gt;
#* Done.&lt;br /&gt;
# Write up departmental proposal document by September 1st&lt;br /&gt;
#* Initial submission on September 10th.&lt;br /&gt;
# Present the proposal formally to the department in October&lt;br /&gt;
#* Proposal presentation on October 7th.&lt;br /&gt;
# Define a macrostructure model and cost function&lt;br /&gt;
#* The committee has been very involved in refining the idea of the &amp;quot;cost functions&amp;quot; for various fitting steps.  The macrostructure model has gone through several revisions and is almost done.&lt;br /&gt;
# Implement curve clustering using the macrostructure model; write up for ISMRM&lt;br /&gt;
#* {{red|Not done.}}  Intended ISMRM submission on Rician noise correction turned out not to be novel, though I did learn about a ton of related work that has been quite successful.&lt;br /&gt;
# Extend the macrostructure cost function to include DWI values&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Implement macrostructure adjustment from DWIs; write up for an MRM paper&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Get preliminary histological data of some kind, either by doing it in-house, working with collaborators, or outsourcing&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Derive a minimal set of microstructure measurements from histological data&lt;br /&gt;
#* {{red|Not done.}}&lt;br /&gt;
# Copy South Africa data from DVDs and work with Amanda to import and process it&lt;br /&gt;
#* All data are organized on the data server but not processed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Nathan Malkin|Nathan]] ===&lt;br /&gt;
* [[Diffusion Processing Pipeline|Diffusion pipeline]] work&lt;br /&gt;
** Medical Image library&lt;br /&gt;
** Possible further work, including:&lt;br /&gt;
*** Tubegen replacement&lt;br /&gt;
*** New (or updated) dfit utility&lt;br /&gt;
*** Others &lt;br /&gt;
* Independent research&lt;br /&gt;
*# Develop interesting research question&lt;br /&gt;
*#* Related to my interests in human-computer interaction, behavioral economics/cognitive biases, etc.&lt;br /&gt;
*# Come up with a project based on it&lt;br /&gt;
*# Apply for UTRA for this project (deadline: &#039;&#039;&#039;February 9&#039;&#039;&#039;)&lt;br /&gt;
*# Apply for NSF REU (Research Experience for Undergraduates) grant (deadline: &#039;&#039;&#039;February 15&#039;&#039;&#039;)&lt;br /&gt;
*# Begin work on the project&lt;br /&gt;
* Help with data processing for our [[Diffusion MRI#Neuroscience_Collaborations|collaborators]]&lt;br /&gt;
** Including, possibly, new tools that may be required&lt;br /&gt;
* (Ongoing) Provide support and documentation for previously-written tools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Radu === &lt;br /&gt;
&lt;br /&gt;
# Learning distance metrics from users (with Cagatay)&lt;br /&gt;
# Finalize(revision)/Resubmit maps paper&lt;br /&gt;
# Analysis &amp;quot;nudge&amp;quot; pilot study&lt;br /&gt;
# Prepare thesis proposal talk&lt;br /&gt;
# Create a daphne-module map&lt;br /&gt;
# Deploy existent maps at Immgen&lt;br /&gt;
# Design collaborative module for maps + instrumentation&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Wenjin ===&lt;br /&gt;
&lt;br /&gt;
* Finish journal draft and submit&lt;br /&gt;
* Get at least one real brain dataset with double-PGSE acquisition (work with Ed)&lt;br /&gt;
* MICCAI workshop&lt;br /&gt;
* Get ready for thesis proposal&lt;br /&gt;
* (Maybe) Define additional axon estimation application areas: brain tumor inflammation, Wrist fiber, etc. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== [[User:Steven Gomez|Steve]] ===&lt;br /&gt;
&lt;br /&gt;
Comps progress&lt;br /&gt;
# Receive brain+code dump from Trevor re: interaction histories (meet to discuss before 6/7)&lt;br /&gt;
# Application integration for interaction histories&lt;br /&gt;
#* Get G3D history library from Trevor asap, and implement sample app and some basic interactions w/ history collection by 6/20&lt;br /&gt;
#* BrainApp, Brain circuit vis?&lt;br /&gt;
# LCS algorithm over histories for required interaction patterns&lt;br /&gt;
#* first draft, test on synthetic data by 6/25&lt;br /&gt;
# Algorithm for remaining time estimation from pattern&lt;br /&gt;
#* first pass by 7/1&lt;br /&gt;
&lt;br /&gt;
ImmGen&lt;br /&gt;
# Build Radu&#039;s code and look for contribution to extend tools (6/15) - meet with Radu?&lt;br /&gt;
&lt;br /&gt;
Misc&lt;br /&gt;
# [http://www.eis.mdx.ac.uk/vass VASS] Application/Registration&lt;br /&gt;
#* draft of app materials (essay, cv) by 6/1 (deadline: 7/23)&lt;br /&gt;
# Work on poster submission for vis (deadline: 7/1)&lt;br /&gt;
# Learn more about Caroline&#039;s work before she gets here (read her &amp;quot;Implied Dynamics&amp;quot; and &amp;quot;Visual Metaphors&amp;quot; papers)&lt;br /&gt;
# NSF Grad Fellowship application -- first draft on all parts by 8/30&lt;br /&gt;
#* search other fellowships&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Ryan ===&lt;br /&gt;
&lt;br /&gt;
* Get oriented in lab and on campus&lt;br /&gt;
* Progress with coursework&lt;br /&gt;
* Work on NSF GRFP Application and Research Comps projects&lt;br /&gt;
** Cortical connectivity (vis)&lt;br /&gt;
** Surface registration (vis and modeling)&lt;br /&gt;
** Surface descriptors (algo and applications)&lt;br /&gt;
&lt;br /&gt;
== Past Plans and Goals ==&lt;br /&gt;
* [[/Summer-Fall 2010|Summer-Fall &#039;10]]&lt;br /&gt;
* [[/Spring 2010|Spring &#039;10]]&lt;br /&gt;
* [[/Fall 2009|Fall &#039;09]]&lt;br /&gt;
* [[/Summer 2009|Summer &#039;09]]&lt;br /&gt;
* [[/Spring 2009|Spring &#039;09]]&lt;br /&gt;
* [[/Fall 2008|Fall &#039;08]]&lt;br /&gt;
* [http://sites.google.com/a/vis.cs.brown.edu/collaboravis/Home/summer-08-group-goals Summer &#039;08]&lt;br /&gt;
&lt;br /&gt;
[[Category:VRL]]&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Roi_select&amp;diff=4578</id>
		<title>Roi select</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Roi_select&amp;diff=4578"/>
		<updated>2010-09-26T17:10:43Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Compiling roi_select */ updated with new location of roi_select&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:roi_select}}&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Regions of Interest Selection Tool&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;roi_select&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) is a command-line program that, when provided with a collection of curves, selects and outputs only the subset that passes through a particular region (or regions) of interest.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
After we run DTI data through a pipeline (whether [[Diffusion Processing Pipeline|our in-house one]] or a third-party one, such as [http://www.trackvis.org/dtk/ DTK]), we are left with data for hundreds of thousands of streamlines. Often, however, we are interested in only a small subset of that data. We may be working with a specific structure in the brain and would like to see a visualization of only that structure. Or we may be interested in a particular region and would like to compute statistics about the streamlines that pass through that region. Perhaps we want to see how many streamlines connect two distinct regions and what paths they take. The &amp;lt;tt&amp;gt;roi_select&amp;lt;/tt&amp;gt; tool can help achieve these goals.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Installation=&lt;br /&gt;
==Dependencies==&lt;br /&gt;
&amp;lt;tt&amp;gt;roi_select&amp;lt;/tt&amp;gt; has two dependencies, both of which are located under [[$G]]:&lt;br /&gt;
# &amp;lt;tt&amp;gt;[[libcurvecollection]]&amp;lt;/tt&amp;gt; &amp;amp;mdash; for format interoperability, file I/O, and in-memory data handling&lt;br /&gt;
# &amp;lt;tt&amp;gt;gg/args&amp;lt;/tt&amp;gt; &amp;amp;mdash; for command-line argument processing&lt;br /&gt;
&lt;br /&gt;
You will first need to [[Quick Start for CIT Users|set up your sandbox]]. (You may be able to [[Automated Quick Start|do it automatically]].)&lt;br /&gt;
&lt;br /&gt;
To install &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd $G/common/libcurvecollection&lt;br /&gt;
make all&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To install &amp;lt;tt&amp;gt;gg_args&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd $G/common/utility&lt;br /&gt;
make all&lt;br /&gt;
make install&lt;br /&gt;
&lt;br /&gt;
cd $G/common/gg&lt;br /&gt;
make all&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;(see [[Check out projects|more information]])&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Compiling roi_select==&lt;br /&gt;
To compile &amp;lt;tt&amp;gt;roi_select&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd $G/project/brain/pipeline/streamline_processing/roi_select&lt;br /&gt;
make all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will create a subdirectory &amp;lt;tt&amp;gt;obj&amp;lt;/tt&amp;gt; with executables for each of the compilers supported under $G. Unless you have a compelling reason against doing so, your best bet is probably to stick with the latest version (right now, GCC4). Therefore, to execute, run &amp;lt;code&amp;gt;./obj/roi_select-gcc4&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Installing roi_select as a library==&lt;br /&gt;
If you would like to expand on the functionality provided by &amp;lt;tt&amp;gt;roi_select&amp;lt;/tt&amp;gt;, you can install it as a library. After compiling the source (see above), run the command &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;. You will then be able to use it by including &amp;lt;code&amp;gt;#include &amp;lt;brain/roi_select.h&amp;gt;&amp;lt;/code&amp;gt; (and other header files) in your source.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Using roi_select=&lt;br /&gt;
After installing the dependencies and compiling the code, you can run &amp;lt;tt&amp;gt;roi_select&amp;lt;/tt&amp;gt; by executing &amp;lt;code&amp;gt;$G/project/brain/pipeline/streamline_processing/roi_select/obj/roi_select-gcc4&amp;lt;/code&amp;gt; with the following command-line arguments:&lt;br /&gt;
&lt;br /&gt;
==Required arguments==&lt;br /&gt;
{| {{Prettytable}}&lt;br /&gt;
|-&lt;br /&gt;
! {{Hl2}} | Flag&lt;br /&gt;
! {{Hl2}} | Arguments&lt;br /&gt;
! {{Hl2}} | Explanation&lt;br /&gt;
|-&lt;br /&gt;
| -i || filename [string] || &lt;br /&gt;
The source file that contains the tracks that are to be selected. &lt;br /&gt;
* Supported formats are .trk, .data, and .ccf. &lt;br /&gt;
* This can be the absolute path to the file.&lt;br /&gt;
|-&lt;br /&gt;
| -o || filename [string] || &lt;br /&gt;
The output file, where the selected streamlines are to be written. &lt;br /&gt;
* Supported formats are .trk, .data, and .ccf. &lt;br /&gt;
|-&lt;br /&gt;
| -e || expression [string] || The ROI expression &amp;amp;mdash; [[#The_ROI_expression|see explanation below]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==At least one of these arguments is required==&lt;br /&gt;
{| {{Prettytable}}&lt;br /&gt;
|-&lt;br /&gt;
! {{Hl2}} | Flag&lt;br /&gt;
! {{Hl2}} | Arguments&lt;br /&gt;
! {{Hl2}} | Explanation&lt;br /&gt;
|-&lt;br /&gt;
| -roi_aa || &lt;br /&gt;
name [string] &amp;lt;br /&amp;gt;&lt;br /&gt;
x1 [double]  &amp;lt;br /&amp;gt;&lt;br /&gt;
y1 [double]  &amp;lt;br /&amp;gt;&lt;br /&gt;
z1 [double]  &amp;lt;br /&amp;gt;&lt;br /&gt;
x2 [double]  &amp;lt;br /&amp;gt;&lt;br /&gt;
y2 [double]  &amp;lt;br /&amp;gt;&lt;br /&gt;
z2 [double] &lt;br /&gt;
|| &lt;br /&gt;
The first argument is the name by which you refer to this ROI in the expression. &amp;lt;br /&amp;gt;&lt;br /&gt;
The remainder of the arguments are the (x,y,z) coordinates that define this &#039;&#039;&#039;[[#Axis-Aligned ROI|Axis-Aligned ROI]]&#039;&#039;&#039;.&lt;br /&gt;
|-&lt;br /&gt;
| -roi_nifti || &lt;br /&gt;
name [string] &amp;lt;br /&amp;gt;&lt;br /&gt;
filename [string]  &amp;lt;br /&amp;gt;&lt;br /&gt;
region [int]&lt;br /&gt;
|| &lt;br /&gt;
The first argument is the name by which you refer to this ROI in the expression. &amp;lt;br /&amp;gt;&lt;br /&gt;
The rest are the filename and the region (bitmap number) that define this &#039;&#039;&#039;[[#NIfTI Bitmap ROI|NIfTI Bitmap ROI]]&#039;&#039;&#039;.&lt;br /&gt;
|-&lt;br /&gt;
| -roi_sphere|| &lt;br /&gt;
name [string] &amp;lt;br /&amp;gt;&lt;br /&gt;
x [double]  &amp;lt;br /&amp;gt;&lt;br /&gt;
y [double]  &amp;lt;br /&amp;gt;&lt;br /&gt;
z [double]  &amp;lt;br /&amp;gt;&lt;br /&gt;
radius [double] &lt;br /&gt;
|| &lt;br /&gt;
The first argument is the name by which you refer to this ROI in the expression. &amp;lt;br /&amp;gt;&lt;br /&gt;
It is followed by the (x,y,z) coordinates of the sphere center and the radius of this &#039;&#039;&#039;[[#Spherical ROI|Spherical ROI]]&#039;&#039;&#039;.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==ROI types==&lt;br /&gt;
===Axis-Aligned ROI===&lt;br /&gt;
You can think of an &#039;&#039;&#039;Axis-Aligned ROI&#039;&#039;&#039; as an axis-aligned box placed within the image volume. A streamline is said to pass through an Axis-Aligned ROI when any of the line&#039;s vertices lies within the bounds of this box, or when a line segment connecting two adjacent vertices passes through it.&lt;br /&gt;
&lt;br /&gt;
The box that is an Axis-Aligned ROI is defined by two corners: the front bottom left and the back top right. These are labeled as corners 1 and 2 in the following diagram:&lt;br /&gt;
&lt;br /&gt;
[[Image:Cube.png]]&lt;br /&gt;
&lt;br /&gt;
Practically speaking, we check that x&amp;lt;sub&amp;gt;1&amp;lt;/sub&amp;gt;&amp;amp;lt;x&amp;lt;sub&amp;gt;2&amp;lt;/sub&amp;gt;, y&amp;lt;sub&amp;gt;1&amp;lt;/sub&amp;gt;&amp;amp;lt;y&amp;lt;sub&amp;gt;2&amp;lt;/sub&amp;gt;, and z&amp;lt;sub&amp;gt;1&amp;lt;/sub&amp;gt;&amp;amp;lt;z&amp;lt;sub&amp;gt;2&amp;lt;/sub&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The coordinates of the vertices are in millimeters and must be in the same coordinate system (have the same origin) as the streamline vertices in the input file.&lt;br /&gt;
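The two tests described above (vertex inside the box, segment passing through the box) can be sketched as follows. This is an illustrative, self-contained sketch of the geometry only, using the standard &amp;quot;slab&amp;quot; method for the segment test &amp;amp;mdash; &#039;&#039;not&#039;&#039; roi_select&#039;s actual implementation:&lt;br /&gt;

```cpp
#include <algorithm>
#include <cassert>

// Illustrative sketch only; roi_select's real code may differ.
struct Vec3 { double x, y, z; };

// A point is inside the axis-aligned box [lo, hi] when each of its
// coordinates lies between the corresponding corner coordinates.
bool pointInBox(const Vec3 &p, const Vec3 &lo, const Vec3 &hi) {
    return p.x >= lo.x && p.x <= hi.x &&
           p.y >= lo.y && p.y <= hi.y &&
           p.z >= lo.z && p.z <= hi.z;
}

// Segment-vs-box test via the "slab" method: clip the segment's
// parameter range [0,1] against the three coordinate slabs and check
// whether a non-empty interval remains.
bool segmentIntersectsBox(const Vec3 &a, const Vec3 &b,
                          const Vec3 &lo, const Vec3 &hi) {
    double tmin = 0.0, tmax = 1.0;
    const double av[3] = {a.x, a.y, a.z};
    const double bv[3] = {b.x, b.y, b.z};
    const double lov[3] = {lo.x, lo.y, lo.z};
    const double hiv[3] = {hi.x, hi.y, hi.z};
    for (int i = 0; i < 3; ++i) {
        double d = bv[i] - av[i];
        if (d == 0.0) {
            // Segment parallel to this slab: reject if outside it.
            if (av[i] < lov[i] || av[i] > hiv[i]) return false;
        } else {
            double t1 = (lov[i] - av[i]) / d;
            double t2 = (hiv[i] - av[i]) / d;
            tmin = std::max(tmin, std::min(t1, t2));
            tmax = std::min(tmax, std::max(t1, t2));
        }
    }
    return tmin <= tmax;
}
```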
&lt;br /&gt;
&lt;br /&gt;
===NIfTI Bitmap ROI===&lt;br /&gt;
&#039;&#039;&#039;NIfTI Bitmap ROIs&#039;&#039;&#039; are defined in [[NIfTI]] files. They use only three of the available dimensions and, for every voxel, store zero or a positive integer. A voxel with a zero value is not in any ROI. A voxel with value 1 is in ROI 1, value 2 means ROI 2, and so on. Therefore, a definition of a NIfTI Bitmap ROI consists of a filename and an integer &amp;amp;mdash; the particular ROI you are interested in.&lt;br /&gt;
&lt;br /&gt;
If you want to select multiple regions from a single file, you will have to define them as separate ROIs. However, if you want the program to use &#039;&#039;&#039;all&#039;&#039;&#039; the ROIs in a given file (an &#039;&#039;or&#039;&#039; of all regions), you can just specify the ROI number as &amp;lt;tt&amp;gt;-1&amp;lt;/tt&amp;gt;.&lt;br /&gt;
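The membership rule described above amounts to a one-line check. The following is an illustrative sketch of that rule, not the tool&#039;s actual code:&lt;br /&gt;

```cpp
// Illustrative sketch only: a voxel with label value v belongs to the
// requested ROI `region`; passing region == -1 accepts every labeled
// voxel (any positive value), i.e. an "or" of all ROIs in the file.
bool voxelInROI(int v, int region) {
    if (region == -1) return v > 0;
    return v == region;
}
```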
&lt;br /&gt;
&lt;br /&gt;
===Spherical ROI===&lt;br /&gt;
A &#039;&#039;&#039;Spherical ROI&#039;&#039;&#039; is simply a sphere in the image volume, defined by the (x,y,z) coordinates of its center and a radius; all of these values are in millimeters.&lt;br /&gt;
&lt;br /&gt;
A streamline is said to pass through the ROI if any of its vertices is within the sphere, or if the straight line segment connecting any two vertices passes through the sphere.&lt;br /&gt;
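This containment test can be sketched as a closest-point-on-segment computation: find the point on the segment nearest the sphere&#039;s center and compare its distance to the radius. The following is illustrative only, &#039;&#039;not&#039;&#039; roi_select&#039;s actual implementation:&lt;br /&gt;

```cpp
#include <algorithm>
#include <cassert>

// Illustrative sketch only; roi_select's real code may differ.
struct Point3 { double x, y, z; };

// A segment passes through a sphere when the closest point on the
// segment to the sphere's center lies within the radius.
bool segmentHitsSphere(const Point3 &a, const Point3 &b,
                       const Point3 &c, double radius) {
    double dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    double len2 = dx * dx + dy * dy + dz * dz;
    // Parameter of the projection of c onto the segment, clamped to
    // [0, 1]; a degenerate segment (a == b) reduces to a point test.
    double t = 0.0;
    if (len2 > 0.0) {
        t = ((c.x - a.x) * dx + (c.y - a.y) * dy + (c.z - a.z) * dz) / len2;
        t = std::max(0.0, std::min(1.0, t));
    }
    double px = a.x + t * dx - c.x;
    double py = a.y + t * dy - c.y;
    double pz = a.z + t * dz - c.z;
    return px * px + py * py + pz * pz <= radius * radius;
}
```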
&lt;br /&gt;
&lt;br /&gt;
==The ROI expression==&lt;br /&gt;
After you have defined the ROIs using the &#039;&#039;-roi_aa&#039;&#039;, &#039;&#039;-roi_nifti&#039;&#039;, or &#039;&#039;-roi_sphere&#039;&#039; arguments, you tell the tool exactly which ROIs to check &amp;amp;mdash; and how &amp;amp;mdash; using the expression string (&#039;&#039;-e&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The expression string can be the name of just one ROI, or an arbitrarily complex boolean algebra expression using different kinds of ROIs, parentheses, and assorted NOTs, ANDs, ORs, and XORs.&lt;br /&gt;
&lt;br /&gt;
{{infobox|&#039;&#039;&#039;Note:&#039;&#039;&#039; any expression more complex than a single ROI will necessarily contain spaces. For the shell to pass the string to the program as a single argument, you will need to enclose it in quotes. &amp;lt;br /&amp;gt;&lt;br /&gt;
For example: &amp;lt;code&amp;gt;... -e &amp;quot;a AND b&amp;quot;&amp;lt;/code&amp;gt; is good, &amp;lt;code&amp;gt;... -e a AND b&amp;lt;/code&amp;gt; is bad }}&lt;br /&gt;
&lt;br /&gt;
The supported operators are:&lt;br /&gt;
{| {{Prettytable|width=70%}}&lt;br /&gt;
|-&lt;br /&gt;
! {{Hl2}} | Operator&lt;br /&gt;
! {{Hl2}} | Alternate syntax&lt;br /&gt;
! {{Hl2}} | Selects&lt;br /&gt;
|-&lt;br /&gt;
| not || NOT, ! || Lines that do &#039;&#039;not&#039;&#039; pass through the specified regions&lt;br /&gt;
|-&lt;br /&gt;
| and || AND, &amp;amp;, &amp;amp;&amp;amp; || Lines that pass through both specified regions&lt;br /&gt;
|-&lt;br /&gt;
| or  || &lt;br /&gt;
OR, |, &amp;amp;#124;&amp;amp;#124; &lt;br /&gt;
|| Lines that pass through at least one of the specified regions&lt;br /&gt;
|-&lt;br /&gt;
| xor || XOR, ^ || Lines that pass through only one of the specified regions (not both)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{{infobox|&#039;&#039;&#039;Avoid errors:&#039;&#039;&#039; operators, even the symbolic ones, need to be separated from ROI names by spaces. &amp;lt;br /&amp;gt;&lt;br /&gt;
For example: &amp;lt;code&amp;gt;... -e &amp;quot;a&amp;amp;b&amp;quot;&amp;lt;/code&amp;gt; is invalid syntax; use &amp;lt;code&amp;gt;... -e &amp;quot;a &amp;amp; b&amp;quot;&amp;lt;/code&amp;gt; instead}}&lt;br /&gt;
&lt;br /&gt;
=Usage examples=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;./roi_select-gcc4 -i input_file.trk -o output_file.trk -roi_aa myAAroi 0 0 0 123.456 12.345 1.234 -e myAAroi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Selects all the lines that pass through the region between (0,0,0) and (123.456, 12.345, 1.234).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;./roi_select-gcc4 -i input_file.trk -o output_file.trk -roi_aa myAAroi 0 0 0 123.456 12.345 1.234 -roi_nifti myNIfTIroi roi.nii.gz 1 -e &amp;quot;myAAroi OR myNIfTIroi&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Selects all the lines that pass through the region between (0,0,0) and (123.456, 12.345, 1.234) &#039;&#039;or&#039;&#039; through ROI 1 defined in roi.nii.gz.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;./roi_select-gcc4 -i input_file.trk -o output_file.trk -roi_aa myAAroi 0 0 0 123.456 12.345 1.234 -roi_nifti myNIfTIroi roi.nii.gz 1 -roi_nifti anotherNIfTIroi roi2.nii.gz 3 -e &amp;quot;anotherNIfTIroi AND NOT (myAAroi OR myNIfTIroi)&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Selects all the lines that pass through ROI 3 defined in roi2.nii.gz &#039;&#039;but not&#039;&#039; through either of the regions in the previous example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;./roi_select-gcc4 -i input_file.trk -o output_file.trk -roi_nifti roi1 roi.nii.gz 1 -roi_nifti roi2 roi.nii.gz 2 -roi_nifti roi3 roi.nii.gz 3 -e &amp;quot;roi1 | roi2 | roi3&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Selects the lines that pass through any of ROIs 1, 2, or 3, all of which are defined in the same file &amp;amp;mdash; roi.nii.gz.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;./roi_select-gcc4 -i input_file.trk -o output_file.trk -roi_nifti rois roi.nii.gz -1 -e rois&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Selects all lines that pass through any of the ROIs defined in roi.nii.gz.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;... -e &amp;quot;( a &amp;amp; !(b) ) XOR ( (c OR NOT (d AND NOT e) OR (((b)) &amp;amp;&amp;amp; c)) )&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can get pretty wild with your ROI expression, as long as you&#039;re using the proper syntax and all parentheses match.&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4569</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4569"/>
		<updated>2010-09-16T21:05:52Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* CurveCollection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt; &lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt; near the top of your file.&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive. However, they are processed and stored in their lower-case form.&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the program will exit with a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Those wishing to call &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; from within a loop may want to avoid&lt;br /&gt;
the efficiency loss associated with looking up the name at each call.&lt;br /&gt;
Instead, they can perform the lookup once using&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#Vertex_properties_2|curveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt;, and pass the resulting int ID&lt;br /&gt;
to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside their loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
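&lt;br /&gt;
&amp;lt;blockquote&amp;gt;For example (a sketch; &amp;lt;tt&amp;gt;myCollection&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;myCurve&amp;lt;/tt&amp;gt;, and the property name &amp;quot;fa&amp;quot; are hypothetical): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
int faID = myCollection.getVertexPropertyID(&amp;quot;fa&amp;quot;); // one lookup, outside the loop&lt;br /&gt;
double sum = 0.0;&lt;br /&gt;
for(int i = 0; i &amp;lt; myCurve.size(); ++i)&lt;br /&gt;
    sum += myCurve[i].property(faID); // cheap integer-indexed access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;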
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
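&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example (a sketch): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
std::vector&amp;lt;CurveVertex&amp;gt; vertices;&lt;br /&gt;
vertices.push_back(CurveVertex(0.0, 0.0, 0.0));&lt;br /&gt;
vertices.push_back(CurveVertex(1.0, 0.0, 0.0));&lt;br /&gt;
Curve myCurve(vertices); // copies the vertices; any vertex properties are dropped&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;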
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The destructor will delete all vertices that constituted this curve and all properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
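&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example (a sketch): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve;&lt;br /&gt;
myCurve.addVertex(CurveVertex(0.0, 0.0, 0.0));&lt;br /&gt;
myCurve.addVertex(CurveVertex(0.0, 1.0, 0.0)); // properties are checked on each call&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;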
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex*&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with pointers to all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection. Throws &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string was not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors and mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_and_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these behave identically.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The actual length of the curve (sum of vertex-to-vertex distances).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current curve and new curves must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Work like typical STL iterators: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; returns an iterator to the first CurveVertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; returns one past the last.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: the iterator points to a &#039;&#039;pointer&#039;&#039; to a CurveVertex. To reiterate: if you dereference the iterator, you get the pointer to a CurveVertex. (This design choice is motivated by the internal representation of a Curve.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const * currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning&#039;&#039;&#039;: a Curve has a &amp;lt;tt&amp;gt;const_iterator&amp;lt;/tt&amp;gt; but no &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt;. If you use an &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt; in your code, you will have problems!&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The constructors take vectors of curves (or pointers to them) as parameters and construct a new CurveCollection using copies of those curves. &lt;br /&gt;
&lt;br /&gt;
If the curves contain any curve and/or vertex properties, they will be preserved.  However, for this to happen, the curve and vertex property names must match (in the same order) across all curves. If they do not, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown, and the constructor exits.&lt;br /&gt;
&lt;br /&gt;
The constructor will also do its best to preserve the dimensions and voxel size of the source image.  In cases of conflicting dimensions and voxel sizes, the constructor will use the smallest voxel size among the candidates and the dimensions associated with that value.  Note that these values are only meaningful when writing to (or reading from) TrackVis (&#039;&#039;.trk&#039;&#039;) files (see note at &amp;lt;tt&amp;gt;CurveCollection::writeTrackVisFile&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(CurveCollection const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  Will produce a curve collection identical to the given one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Constructs an empty curve collection with no curves and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clears and deletes everything in the CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addCurve(Curve const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given curve to this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
For this to happen, the properties of the given curve must match those of this CurveCollection; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
However, there are two &#039;&#039;&#039;exceptions&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 1&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If the CurveCollection is empty, it will take on the properties (if any) of the given curve. That is, if a curve &#039;&#039;C&#039;&#039; with properties (or vertex properties) is added to a blank CurveCollection, no error is thrown; instead, the CurveCollection &amp;quot;adopts&amp;quot; &#039;&#039;C&#039;&#039;&#039;s properties, and the properties of any subsequent curves will be required to match those of &#039;&#039;C&#039;&#039;.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 2&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If a CurveCollection has properties, but the given curve has none, the given curve will be assigned zeros (0.0) in place of all the properties it does not have.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
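&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example (a sketch; &amp;lt;tt&amp;gt;curveWithProperties&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;plainCurve&amp;lt;/tt&amp;gt; are hypothetical): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CurveCollection collection;               // blank: no properties yet&lt;br /&gt;
collection.addCurve(curveWithProperties); // special case 1: collection adopts its properties&lt;br /&gt;
collection.addCurve(plainCurve);          // special case 2: missing properties filled with 0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;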
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;Curve*&amp;gt; const &amp;amp; getCurves() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a reference to the vector of pointers to the curves that comprise this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Overloaded random access operators for CurveCollection return a reference to the curve at the specified index.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage examples: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve&amp;amp; firstCurve = aCurveCollection[0];&amp;lt;br /&amp;gt;Curve&amp;amp; lastCurve = aCurveCollection[aCurveCollection.size() - 1];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void defineCurveProperty(std::string const &amp;amp;, std::vector&amp;lt;double&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assigns the given double values to the curves as properties. &lt;br /&gt;
&lt;br /&gt;
The properties are assigned in order. That is, the &#039;&#039;n&#039;&#039;th curve in the CurveCollection will get the &#039;&#039;n&#039;&#039;th property from the given vector. If the number of curves and the number of properties given do not match, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; error is thrown.&lt;br /&gt;
&lt;br /&gt;
The property values can then be retrieved using the string name they were saved with, or using their Property ID (found using [[#Accessing_properties_and_information_about_them|CurveCollection::getCurvePropertyID]]). Note that property name strings are converted to lowercase before being saved.&lt;br /&gt;
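&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example (a sketch; &amp;lt;tt&amp;gt;collection&amp;lt;/tt&amp;gt; is an already-populated CurveCollection): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
std::vector&amp;lt;double&amp;gt; lengths;&lt;br /&gt;
for(int i = 0; i &amp;lt; collection.size(); ++i)&lt;br /&gt;
    lengths.push_back(collection[i].length());&lt;br /&gt;
collection.defineCurveProperty(&amp;quot;length&amp;quot;, lengths); // one value per curve, in order&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;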
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void defineVertexProperty(std::string const &amp;amp;, std::vector&amp;lt;std::vector&amp;lt;double&amp;gt; &amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assigns the given double values to the vertices as properties. &lt;br /&gt;
&lt;br /&gt;
The properties are assigned in order. That is, the &#039;&#039;k&#039;&#039;th vertex in the &#039;&#039;n&#039;&#039;th curve in the CurveCollection will get the &#039;&#039;k&#039;&#039;th value from the &#039;&#039;n&#039;&#039;th vector.  If the dimensions of the given vectors do not match the number of curves and the number of vertices in each curve, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; error is thrown.&lt;br /&gt;
&lt;br /&gt;
The property values can then be retrieved using the string name they were saved with, or using their Property ID (found using [[#Accessing_properties_and_information_about_them|CurveCollection::getVertexPropertyID]]). Note that property name strings are converted to lowercase before being saved.&lt;br /&gt;
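&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example (a sketch; &amp;lt;tt&amp;gt;collection&amp;lt;/tt&amp;gt; is an already-populated CurveCollection): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
std::vector&amp;lt;std::vector&amp;lt;double&amp;gt; &amp;gt; zs;&lt;br /&gt;
for(int i = 0; i &amp;lt; collection.size(); ++i)&lt;br /&gt;
{&lt;br /&gt;
    std::vector&amp;lt;double&amp;gt; curveZs;&lt;br /&gt;
    for(int k = 0; k &amp;lt; collection[i].size(); ++k)&lt;br /&gt;
        curveZs.push_back(collection[i][k].z());&lt;br /&gt;
    zs.push_back(curveZs);&lt;br /&gt;
}&lt;br /&gt;
collection.defineVertexProperty(&amp;quot;zcoord&amp;quot;, zs); // one vector per curve, one value per vertex&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;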
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of curves in this collection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4568</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4568"/>
		<updated>2010-09-16T20:59:45Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Accessors */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines near the top of your file: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
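&lt;br /&gt;
A minimal end-to-end sketch (error handling omitted): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
    Curve c;&lt;br /&gt;
    c.addVertex(CurveVertex(0.0, 0.0, 0.0));&lt;br /&gt;
    c.addVertex(CurveVertex(3.0, 4.0, 0.0));&lt;br /&gt;
    CurveCollection collection;&lt;br /&gt;
    collection.addCurve(c); // the collection stores a copy of the curve&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;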
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
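&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex origin(0.0, 0.0, 0.0); // x, y, z&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;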
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive; they are normalized to (and stored in) their lower-case form.&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the program will exit with a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Those wishing to call &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; from within a loop may want to avoid&lt;br /&gt;
the efficiency loss associated with looking up the name at each call.&lt;br /&gt;
Instead, they can perform the lookup once using&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#Vertex_properties_2|curveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt;, and pass the resulting int ID&lt;br /&gt;
to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside their loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
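&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage sketch (illustrative only; assumes &amp;lt;tt&amp;gt;distance&amp;lt;/tt&amp;gt; is the straight-line Euclidean distance): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CurveVertex a(0.0, 0.0, 0.0);&lt;br /&gt;
CurveVertex b(3.0, 4.0, 0.0);&lt;br /&gt;
double d = a.distance(b);  // straight-line distance: 5.0&lt;br /&gt;
bool same = a.equals(b);   // false: coordinates differ by more than EPSILON&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;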
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The destructor will delete all vertices that constituted this curve and all properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
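&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage sketch (illustrative only): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// build the vertices up front, then construct the Curve in one call&lt;br /&gt;
std::vector&amp;lt;CurveVertex&amp;gt; vertices;&lt;br /&gt;
vertices.push_back(CurveVertex(0.0, 0.0, 0.0));&lt;br /&gt;
vertices.push_back(CurveVertex(1.0, 0.0, 0.0));&lt;br /&gt;
vertices.push_back(CurveVertex(2.0, 1.0, 0.0));&lt;br /&gt;
// no per-vertex property check here, unlike repeated addVertex calls&lt;br /&gt;
Curve myCurve(vertices);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;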
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex*&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with pointers to all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection. Throws &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string was not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors and mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_and_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these functions behave identically.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the length of the curve: the sum of the distances between consecutive vertices.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current curve and new curves must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Standard STL-style iterators: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; points to the first CurveVertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; points one past the last.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: the iterator points to a &#039;&#039;pointer&#039;&#039; to a CurveVertex. To reiterate: if you dereference the iterator, you get the pointer to a CurveVertex. (This design choice is motivated by the internal representation of a Curve.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const * currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning&#039;&#039;&#039;: a Curve has a &amp;lt;tt&amp;gt;const_iterator&amp;lt;/tt&amp;gt; but no &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt;. If you use an &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt; in your code, you will have problems!&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The constructors take vectors of curves (or pointers to them) as parameters and construct a new CurveCollection using copies of those curves. &lt;br /&gt;
&lt;br /&gt;
If the curves contain any curve and/or vertex properties, they will be preserved.  However, for this to happen, the curve and vertex property names must match (including their order) across all curves. If this is not the case, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown, and the constructor exits.&lt;br /&gt;
&lt;br /&gt;
The constructor will also do its best to preserve the dimensions and voxel size of the source image.  In cases of conflicting dimensions and voxel sizes, the constructor will use the smallest voxel size among the candidates and the dimensions associated with that value.  Note that these values are only meaningful when writing to (or reading from) TrackVis (&#039;&#039;.trk&#039;&#039;) files (see note at &amp;lt;tt&amp;gt;CurveCollection::writeTrackVisFile&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(CurveCollection const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  Will produce a curve collection identical to the given one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Constructs a blank curve collection, much like you&#039;d expect it to.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clears and deletes everything in the CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addCurve(Curve const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given curve to this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
For this to happen, the properties of the given curve must match those of this CurveCollection. Otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
However, there are two &#039;&#039;&#039;exceptions&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 1&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If the CurveCollection is empty, it will take on the properties (if any) of the given curve. That is, if a curve &#039;&#039;C&#039;&#039; with properties (or vertex properties) is added to a blank CurveCollection, no error is thrown; instead, the CurveCollection &amp;quot;adopts&amp;quot; &#039;&#039;C&#039;&#039;&#039;s properties, and the properties of any subsequent curves will be required to match those of &#039;&#039;C&#039;&#039;.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 2&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If a CurveCollection has properties, but the given curve has none, the given curve will be assigned zeros (0.0) in place of all the properties it does not have.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
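&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage sketch (illustrative only &amp;amp;mdash; the three curves named below are assumed to exist): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CurveCollection cc;                // blank collection&lt;br /&gt;
cc.addCurve(curveWithProperties);  // special case 1: cc adopts this curve&#039;s properties&lt;br /&gt;
cc.addCurve(plainCurve);           // special case 2: missing properties are filled with 0.0&lt;br /&gt;
cc.addCurve(mismatchedCurve);      // different property names: throws domain_error&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;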
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;Curve*&amp;gt; const &amp;amp; getCurves() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a reference to the vector of pointers to the curves that make up this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The overloaded subscript operators return a reference to the Curve at the specified index.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage examples: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve&amp;amp; firstCurve = aCurveCollection[0];&amp;lt;br /&amp;gt;Curve&amp;amp; lastCurve = aCurveCollection[aCurveCollection.size() - 1];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void defineCurveProperty(std::string const &amp;amp;, std::vector&amp;lt;double&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assigns the given double values to the curves as properties. &lt;br /&gt;
&lt;br /&gt;
The properties are assigned in order. That is, the &#039;&#039;n&#039;&#039;th curve in the CurveCollection will get the &#039;&#039;n&#039;&#039;th property from the given vector. If the number of curves and the number of properties given do not match, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; error is thrown.&lt;br /&gt;
&lt;br /&gt;
The property values can later be retrieved using the string name they were saved with, or using their Property ID (found using [[#Accessing_properties_and_information_about_them|CurveCollection::getCurvePropertyID]]). Note that property name strings are converted to lowercase before being saved.&lt;br /&gt;
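&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage sketch (illustrative only &amp;amp;mdash; assumes a populated CurveCollection &amp;lt;tt&amp;gt;cc&amp;lt;/tt&amp;gt;): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
std::vector&amp;lt;double&amp;gt; lengths;&lt;br /&gt;
for(int i = 0; i &amp;lt; cc.size(); ++i)&lt;br /&gt;
    lengths.push_back(cc[i].length());&lt;br /&gt;
// one value per curve; a size mismatch would throw invalid_argument&lt;br /&gt;
cc.defineCurveProperty(&amp;quot;Length&amp;quot;, lengths);&lt;br /&gt;
// property names are case-insensitive and stored lower-cased&lt;br /&gt;
double len0 = cc[0].property(&amp;quot;length&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;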
&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of curves in this collection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4567</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4567"/>
		<updated>2010-09-16T20:57:07Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Accessors */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines near the top of your file: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive; they are converted to lower case before being stored.&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection, before you can call this function. If you call it before this has happened, the function will throw a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
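&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage sketch (illustrative only &amp;amp;mdash; assumes the vertex belongs to a Curve in a CurveCollection that defines a vertex property named &amp;quot;fa&amp;quot;): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CurveVertex &amp;amp; v = someCurve[0];&lt;br /&gt;
double fa = v.property(&amp;quot;fa&amp;quot;);  // read&lt;br /&gt;
v.property(&amp;quot;fa&amp;quot;) = 0.5;        // write: the return value is a non-const reference&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;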
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Those wishing to call &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; from within a loop may want to avoid&lt;br /&gt;
the efficiency loss of looking up the name on every call.&lt;br /&gt;
Instead, they can perform the lookup once using&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#Vertex_properties_2|curveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt;, and pass the resulting int ID&lt;br /&gt;
to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside their loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The destructor will delete all vertices that constituted this curve and all properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
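The matching and zero-fill rules above can be modeled with a short, self-contained sketch (the type names are illustrative stand-ins, not the library&#039;s real classes):&lt;br /&gt;

```cpp
#include <cassert>
#include <stdexcept>
#include <utility>
#include <vector>

// Illustrative stand-ins: a vertex is reduced to its property values, and
// the curve knows how many vertex properties its collection defines.
struct VertexRec { std::vector<double> props; };

struct CurveRec {
    std::size_t propertyCount = 0;   // fixed by the containing collection
    std::vector<VertexRec> vertices;

    // Mirrors the documented addVertex rules: a property-less vertex is
    // padded with zeros; a mismatched one triggers invalid_argument.
    void addVertex(VertexRec v) {
        if (v.props.empty())
            v.props.assign(propertyCount, 0.0);   // special case: zero-fill
        else if (v.props.size() != propertyCount)
            throw std::invalid_argument("vertex properties do not match curve");
        vertices.push_back(std::move(v));
    }
};
```

Note that this check runs on every append, which is why bulk construction through the vector constructor can be cheaper.&lt;br /&gt;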
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex*&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with pointers to all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection. Throws &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string was not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors and mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_and_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these work just like they do.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The actual length of the curve (sum of vertex-to-vertex distances).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current curve and the given curve must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Work like typical STL iterators: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; refers to the first vertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; to the position one past the last.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: the iterator&#039;s value type is a &#039;&#039;pointer&#039;&#039; to a CurveVertex. That is, dereferencing the iterator yields a CurveVertex*, not a CurveVertex. (This design choice is motivated by the internal representation of a Curve.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const * currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning&#039;&#039;&#039;: a Curve has a &amp;lt;tt&amp;gt;const_iterator&amp;lt;/tt&amp;gt; but no &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt;. If you use an &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt; in your code, you will have problems!&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
&lt;br /&gt;
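A minimal sketch of this layout, with illustrative stand-in names: the collection owns the name strings, a name is resolved once to an integer slot, and that slot indexes each vertex&#039;s bare vector of doubles.&lt;br /&gt;

```cpp
#include <cassert>
#include <string>
#include <vector>

// Property name strings live once, at the top level, as described above.
struct CollectionNames {
    std::vector<std::string> vertexPropertyNames;

    // Resolve a name to its slot index; -1 if unknown. Stands in for
    // CurveCollection::getVertexPropertyID.
    int getVertexPropertyID(const std::string& name) const {
        for (std::size_t i = 0; i < vertexPropertyNames.size(); ++i)
            if (vertexPropertyNames[i] == name) return static_cast<int>(i);
        return -1;
    }
};

// A vertex stores only doubles; the name-to-slot mapping is resolved through
// the collection. This is why an orphan vertex cannot answer property("fa")
// and the library throws runtime_error instead. (Assumes the name exists.)
double vertexProperty(const std::vector<double>& values,
                      const CollectionNames& owner, const std::string& name) {
    return values[static_cast<std::size_t>(owner.getVertexPropertyID(name))];
}
```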
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The constructors take vectors of curves (or pointers to them) as parameters and construct a new CurveCollection using copies of those curves. &lt;br /&gt;
&lt;br /&gt;
If the curves contain any curve and/or vertex properties, they will be preserved.  However, for this to happen, the curve and vertex property names must match (including their order) across all curves. If this is not the case, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown, and the constructor exits.&lt;br /&gt;
&lt;br /&gt;
The constructor will also do its best to preserve the dimensions and voxel size of the source image.  In cases of conflicting dimensions and voxel sizes, the constructor will use the smallest voxel size among the candidates and the dimensions associated with that value.  Note that these values are only meaningful when writing to (or reading from) TrackVis (&#039;&#039;.trk&#039;&#039;) files (see note at &amp;lt;tt&amp;gt;CurveCollection::writeTrackVisFile&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(CurveCollection const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  Will produce a curve collection identical to the given one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Constructs a blank curve collection, much like you&#039;d expect it to.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clears and deletes everything in the CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addCurve(Curve const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given curve to this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
For this to happen, the properties of the given curve must match those of this CurveCollection. Otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
However, there are two &#039;&#039;&#039;exceptions&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 1&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If the CurveCollection is empty, it will take on the properties (if any) of the given curve. That is, if a curve &#039;&#039;C&#039;&#039; with properties (or vertex properties) is added to a blank CurveCollection, no error is thrown; instead, the CurveCollection &amp;quot;adopts&amp;quot; &#039;&#039;C&#039;&#039;&#039;s properties, and the properties of any subsequent curves will be required to match those of &#039;&#039;C&#039;&#039;.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 2&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If a CurveCollection has properties, but the given curve has none, the given curve will be assigned zeros (0.0) in place of all the properties it does not have.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;Curve*&amp;gt; const &amp;amp; getCurves() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a reference to the vector of pointers to the curves that comprise this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Overloaded random access operators for CurveCollection return a reference to the curve at the specified index.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;Curve&amp;amp; firstCurve = aCurveCollection[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void defineCurveProperty(std::string const &amp;amp;, std::vector&amp;lt;double&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assigns the given double values to the curves as properties. &lt;br /&gt;
&lt;br /&gt;
The properties are assigned in order. That is, the &#039;&#039;n&#039;&#039;th curve in the CurveCollection will get the &#039;&#039;n&#039;&#039;th property from the given vector. If the number of curves and the number of properties given do not match, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; error is thrown.&lt;br /&gt;
&lt;br /&gt;
The property values can later be retrieved using the string name they were saved with, or using their Property ID (found via [[#Accessing_properties_and_information_about_them|CurveCollection::getCurvePropertyID]]). Note that property name strings are converted to lowercase before being saved.&lt;br /&gt;
&lt;br /&gt;
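The positional assignment can be sketched as follows (a self-contained model with hypothetical names, not the library&#039;s implementation):&lt;br /&gt;

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative model of defineCurveProperty: the nth value goes to the nth
// curve, the name is lowercased before storage, and a size mismatch between
// the value vector and the curve count is rejected.
struct MiniCollection {
    std::vector<std::string> curvePropertyNames;
    std::vector<std::vector<double>> curveProperties;  // one vector per curve

    explicit MiniCollection(std::size_t curveCount)
        : curveProperties(curveCount) {}

    void defineCurveProperty(std::string name,
                             const std::vector<double>& values) {
        if (values.size() != curveProperties.size())
            throw std::invalid_argument("one value per curve required");
        std::transform(name.begin(), name.end(), name.begin(),
                       [](unsigned char ch) { return std::tolower(ch); });
        curvePropertyNames.push_back(name);
        for (std::size_t i = 0; i < values.size(); ++i)
            curveProperties[i].push_back(values[i]);   // nth value -> nth curve
    }
};
```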
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of curves in this collection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4561</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4561"/>
		<updated>2010-09-15T15:02:01Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* CurveCollection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt; &lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt; near the top of your file.&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive. However, they are processed and stored in their lower-case form.&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the program will exit with a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Calling the string version of &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; from within a loop incurs a name lookup on every call. To avoid this efficiency loss, perform the lookup once using&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#Vertex_properties_2|CurveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt;, and pass the resulting int ID&lt;br /&gt;
to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside the loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
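A sketch of this pattern, assuming a simple name-to-index table (the helper &amp;lt;tt&amp;gt;findID&amp;lt;/tt&amp;gt; stands in for CurveCollection::getVertexPropertyID, and the data layout is illustrative):&lt;br /&gt;

```cpp
#include <cassert>
#include <string>
#include <vector>

// Resolve a property name to its slot index; -1 if unknown.
int findID(const std::vector<std::string>& names, const std::string& name) {
    for (std::size_t i = 0; i < names.size(); ++i)
        if (names[i] == name) return static_cast<int>(i);
    return -1;
}

// Sum one named property across all vertices: the string lookup happens
// once, outside the loop; the loop body indexes by integer ID only.
double sumProperty(const std::vector<std::vector<double>>& vertexProps,
                   const std::vector<std::string>& names,
                   const std::string& name) {
    const int id = findID(names, name);   // one lookup, not one per vertex
    double sum = 0.0;
    for (const auto& props : vertexProps)
        sum += props[static_cast<std::size_t>(id)];
    return sum;
}
```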
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
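A minimal sketch of this tolerance test (struct and function names are illustrative, not the library&#039;s):&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>

// Stand-in for the position part of a CurveVertex.
struct P { double x, y, z; };

// Each coordinate pair must agree to within EPSILON for the vertices to
// count as equal; properties are deliberately ignored, as documented.
bool equalsWithin(const P& a, const P& b, double EPSILON = 1e-6) {
    return std::fabs(a.x - b.x) < EPSILON &&
           std::fabs(a.y - b.y) < EPSILON &&
           std::fabs(a.z - b.z) < EPSILON;
}
```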
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The destructor will delete all vertices that constituted this curve and all properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex*&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with pointers to all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection. Throws &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string was not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors and mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_and_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these work just like they do.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::string const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the length of the curve: the sum of the distances between consecutive vertices.&lt;br /&gt;
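The piecewise-linear model makes this computation straightforward. A minimal stand-alone sketch (&amp;lt;tt&amp;gt;Point&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;curveLength&amp;lt;/tt&amp;gt; are illustrative stand-ins, not the library&#039;s classes):&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative stand-in for a vertex; the real CurveVertex exposes
// x()/y()/z() accessors instead of public fields.
struct Point { double x, y, z; };

// Piecewise-linear curve length: the sum of the Euclidean distances
// between each pair of adjacent vertices.
double curveLength(std::vector<Point> const & pts) {
    double total = 0.0;
    for (std::size_t i = 1; i < pts.size(); ++i) {
        double dx = pts[i].x - pts[i-1].x;
        double dy = pts[i].y - pts[i-1].y;
        double dz = pts[i].z - pts[i-1].z;
        total += std::sqrt(dx*dx + dy*dy + dz*dz);
    }
    return total;
}
```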
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current and given curves must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A standard iterator pair: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; points to the first CurveVertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; points one past the last.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: the iterator points to a &#039;&#039;pointer&#039;&#039; to a CurveVertex. To reiterate: if you dereference the iterator, you get the pointer to a CurveVertex. (This design choice is motivated by the internal representation of a Curve.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const * currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning&#039;&#039;&#039;: a Curve has a &amp;lt;tt&amp;gt;const_iterator&amp;lt;/tt&amp;gt; but no &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt;. If you use an &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt; in your code, you will have problems!&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
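The layout this describes can be sketched as follows; &amp;lt;tt&amp;gt;VertexData&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;CollectionData&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;lookup&amp;lt;/tt&amp;gt; are illustrative names only, not the library&#039;s actual internals:&lt;br /&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Each vertex stores only a parallel vector of double values...
struct VertexData { std::vector<double> values; };

// ...while the property *names* live once, at the collection level.
struct CollectionData {
    std::vector<std::string> vertexPropertyNames; // shared name table
    std::vector<VertexData>  vertices;

    // A lookup by name resolves the name to an index in the shared
    // table, then reads that index from the vertex's value vector.
    double lookup(std::size_t vertex, std::string const & name) const {
        for (std::size_t i = 0; i < vertexPropertyNames.size(); ++i)
            if (vertexPropertyNames[i] == name)
                return vertices[vertex].values[i];
        throw std::invalid_argument("unknown property: " + name);
    }
};
```

This is also why a vertex outside any collection has no way to answer a name-based lookup: the name table it would need lives in the collection.&lt;br /&gt;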
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The constructors take vectors of curves (or pointers to them) as parameters and construct a new CurveCollection using copies of those curves. &lt;br /&gt;
&lt;br /&gt;
If the curves contain any curve and/or vertex properties, they will be preserved.  However, for this to happen, the curve and vertex property names must match (including their order) across all curves. If they do not, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown, and the constructor exits.&lt;br /&gt;
&lt;br /&gt;
The constructor will also do its best to preserve the dimensions and voxel size of the source image.  If the candidates conflict, the constructor uses the smallest voxel size among them, together with the dimensions associated with that value.  Note that these values are only meaningful when writing to (or reading from) TrackVis (&#039;&#039;.trk&#039;&#039;) files (see note at &amp;lt;tt&amp;gt;CurveCollection::writeTrackVisFile&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(CurveCollection const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  Will produce a curve collection identical to the given one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Constructs an empty curve collection with no curves and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clears and deletes everything in the CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addCurve(Curve const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given curve to this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
For this to succeed, the properties of the given curve must match those of this CurveCollection. Otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
However, there are two &#039;&#039;&#039;exceptions&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 1&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If the CurveCollection is empty, it will take on the properties (if any) of the given curve. That is, if a curve &#039;&#039;C&#039;&#039; with properties (or vertex properties) is added to a blank CurveCollection, no error is thrown; instead, the CurveCollection &amp;quot;adopts&amp;quot; &#039;&#039;C&#039;&#039;&#039;s properties, and the properties of any subsequent curves will be required to match those of &#039;&#039;C&#039;&#039;.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 2&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If a CurveCollection has properties, but the given curve has none, the given curve will be assigned zeros (0.0) in place of all the properties it does not have.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
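Special case 2 amounts to zero-padding the incoming property list. A sketch of that behavior (&amp;lt;tt&amp;gt;padProperties&amp;lt;/tt&amp;gt; is a hypothetical helper, not part of the library, which does this internally in &amp;lt;tt&amp;gt;addCurve&amp;lt;/tt&amp;gt;):&lt;br /&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A curve (or vertex) arriving with no properties is padded with one
// 0.0 per property the collection already defines, so its value
// vector lines up with the collection's shared name table.
std::vector<double> padProperties(std::vector<double> incoming,
                                  std::size_t requiredCount) {
    while (incoming.size() < requiredCount)
        incoming.push_back(0.0);
    return incoming;
}
```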
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;Curve*&amp;gt; const &amp;amp; getCurves() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a reference to the vector of pointers to the curves that comprise this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Overloaded random access operators for CurveCollection return a reference to the curve at the specified index.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;Curve&amp;amp; firstCurve = aCurveCollection[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void defineCurveProperty(std::string const &amp;amp;, std::vector&amp;lt;double&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assigns the given double values to the curves as properties. &lt;br /&gt;
&lt;br /&gt;
The properties are assigned in order. That is, the &#039;&#039;n&#039;&#039;th curve in the CurveCollection will get the &#039;&#039;n&#039;&#039;th property from the given vector. If the number of curves and the number of properties given do not match, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; error is thrown.&lt;br /&gt;
&lt;br /&gt;
The property values can later be retrieved using the string name they were saved with, or using their Property ID (found using [[#Accessing_properties_and_information_about_them|CurveCollection::getCurvePropertyID]]). Note that property name strings are converted to lowercase before being saved.&lt;br /&gt;
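The in-order assignment and its size check can be sketched in isolation; &amp;lt;tt&amp;gt;assignInOrder&amp;lt;/tt&amp;gt; is an illustrative stand-in for what &amp;lt;tt&amp;gt;defineCurveProperty&amp;lt;/tt&amp;gt; does internally, under the assumptions stated above:&lt;br /&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Value n is assigned to curve n; a count mismatch is rejected up
// front with invalid_argument, before anything is modified.
void assignInOrder(std::vector<double> const & values,
                   std::vector<double> & perCurveSlots) {
    if (values.size() != perCurveSlots.size())
        throw std::invalid_argument("value count must equal curve count");
    for (std::size_t i = 0; i < values.size(); ++i)
        perCurveSlots[i] = values[i];
}
```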
&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of curves in this collection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4557</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4557"/>
		<updated>2010-09-13T22:05:15Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Populating a CurveCollection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt; &lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt; near the top of your file.&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive. However, they are processed and stored in their lower-case form.&lt;br /&gt;
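Lowercase normalization of this kind takes only a few lines; &amp;lt;tt&amp;gt;normalizeName&amp;lt;/tt&amp;gt; is an illustrative helper, not a library function:&lt;br /&gt;

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>

// Normalizing every property name to lowercase before storing or
// comparing it yields the case-insensitive matching described above.
std::string normalizeName(std::string name) {
    std::transform(name.begin(), name.end(), name.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    return name;
}
```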
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the program will exit with a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Those calling &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; from within a loop may want to avoid the cost of looking up the name string on each call. Instead, they can perform the lookup once using &amp;lt;tt&amp;gt;[[#Vertex_properties_2|CurveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt; and pass the resulting int ID to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside the loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
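The pattern the blockquote recommends looks roughly like this; &amp;lt;tt&amp;gt;propertyID&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;sumProperty&amp;lt;/tt&amp;gt; are hypothetical stand-ins showing the name lookup hoisted out of the loop:&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Stand-in for getVertexPropertyID: resolve a name to its index in
// the shared name table once, up front.
int propertyID(std::vector<std::string> const & names,
               std::string const & name) {
    for (std::size_t i = 0; i < names.size(); ++i)
        if (names[i] == name) return static_cast<int>(i);
    throw std::invalid_argument("unknown property: " + name);
}

// Summing one property over many vertices: the string lookup happens
// once, outside the loop; each vertex is then read by integer ID.
double sumProperty(std::vector<std::vector<double>> const & vertexValues,
                   std::vector<std::string> const & names,
                   std::string const & name) {
    int id = propertyID(names, name);   // one lookup
    double total = 0.0;
    for (auto const & values : vertexValues)
        total += values[id];            // cheap indexed access
    return total;
}
```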
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
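Both comparisons can be sketched with a minimal stand-in vertex type; &amp;lt;tt&amp;gt;P&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;vertexEquals&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;vertexDistance&amp;lt;/tt&amp;gt; are illustrative, not the library&#039;s API:&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>

// Stand-in for CurveVertex with plain coordinate fields.
struct P { double x, y, z; };

// Coordinate-wise epsilon comparison, as equals() describes;
// properties are deliberately not compared.
bool vertexEquals(P const & a, P const & b, double eps = 1e-6) {
    return std::fabs(a.x - b.x) < eps
        && std::fabs(a.y - b.y) < eps
        && std::fabs(a.z - b.z) < eps;
}

// Euclidean distance between the two positions, as distance() describes.
double vertexDistance(P const & a, P const & b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}
```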
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes the Curve with copies of the given vertices and no curve properties.&lt;br /&gt;
Any properties the given vertices had will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The destructor will delete all vertices that constituted this curve and all properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex*&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with pointers to all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string is not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors and mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_and_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these work just like they do.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::string const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the length of the curve: the sum of the distances between consecutive vertices.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current and given curves must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A standard iterator pair: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; points to the first CurveVertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; points one past the last.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: the iterator points to a &#039;&#039;pointer&#039;&#039; to a CurveVertex. To reiterate: if you dereference the iterator, you get the pointer to a CurveVertex. (This design choice is motivated by the internal representation of a Curve.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const * currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning&#039;&#039;&#039;: a Curve has a &amp;lt;tt&amp;gt;const_iterator&amp;lt;/tt&amp;gt; but no &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt;; code that refers to &amp;lt;tt&amp;gt;Curve::iterator&amp;lt;/tt&amp;gt; will not compile.&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing these strings with every data point in a collection would be prohibitively expensive in memory. Since the data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time, and why a Curve or CurveVertex cannot have properties when it is not in a CurveCollection &amp;amp;mdash; without one, there simply isn&#039;t a way to look up the property names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The constructors take vectors of curves (or pointers to them) as parameters and construct a new CurveCollection using copies of those curves. &lt;br /&gt;
&lt;br /&gt;
If the curves contain any curve and/or vertex properties, they will be preserved.  However, for this to happen, the curves and vertex property names must match (including same order) across all curves. If this is not the case, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown, and the constructor exits.&lt;br /&gt;
&lt;br /&gt;
The constructor will also do its best to preserve the dimensions and voxel size of the source image.  In cases of conflicting dimensions and voxel sizes, the constructor will use the smallest voxel size among the candidates and the dimensions associated with that value.  Note that these values are only meaningful when writing to (or reading from) TrackVis (&#039;&#039;.trk&#039;&#039;) files (see note at &amp;lt;tt&amp;gt;CurveCollection::writeTrackVisFile&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(CurveCollection const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  Will produce a curve collection identical to the given one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Constructs an empty curve collection with no curves and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clears and deletes everything in the CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addCurve(Curve const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given curve to this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
For this to happen, the properties of the given curve must match those of this CurveCollection. Otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
However, there are two &#039;&#039;&#039;exceptions&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 1&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If the CurveCollection is empty, it will take on the properties (if any) of the given curve. That is, if a curve &#039;&#039;C&#039;&#039; with properties (or vertex properties) is added to a blank CurveCollection, no error is thrown; instead, the CurveCollection &amp;quot;adopts&amp;quot; &#039;&#039;C&#039;&#039;&#039;s properties, and the properties of any subsequent curves will be required to match those of &#039;&#039;C&#039;&#039;.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 2&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If a CurveCollection has properties, but the given curve has none, the given curve will be assigned zeros (0.0) in place of all the properties it does not have.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4556</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4556"/>
		<updated>2010-09-13T22:05:02Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Populating a CurveCollection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read from and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt; &lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt; near the top of your file.&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive; internally, they are stored in lower-case form.&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the program will exit with a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Callers invoking &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside a loop may want to avoid the cost of looking up the name on every call. Instead, they can perform the lookup once using &amp;lt;tt&amp;gt;[[#Vertex_properties_2|CurveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt; and pass the resulting int ID to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside the loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The destructor will delete all vertices that constituted this curve and all properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex*&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with pointers to all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string was not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors and mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_and_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these work just like they do.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the length of the curve: the sum of the distances between consecutive vertices.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current and given curves must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard iterator pair: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; points to the first CurveVertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; points one past the last.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: dereferencing the iterator yields a &#039;&#039;pointer&#039;&#039; to a CurveVertex, not the vertex itself. (This design choice is motivated by the internal representation of a Curve.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const * currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning&#039;&#039;&#039;: a Curve has a &amp;lt;tt&amp;gt;const_iterator&amp;lt;/tt&amp;gt; but no &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt;; code that refers to &amp;lt;tt&amp;gt;Curve::iterator&amp;lt;/tt&amp;gt; will not compile.&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing these strings with every data point in a collection would be prohibitively expensive in memory. Since the data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time, and why a Curve or CurveVertex cannot have properties when it is not in a CurveCollection &amp;amp;mdash; without one, there simply isn&#039;t a way to look up the property names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The constructors take vectors of curves (or pointers to them) as parameters and construct a new CurveCollection using copies of those curves. &lt;br /&gt;
&lt;br /&gt;
If the curves contain any curve and/or vertex properties, they will be preserved.  However, for this to happen, the curves and vertex property names must match (including same order) across all curves. If this is not the case, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown, and the constructor exits.&lt;br /&gt;
&lt;br /&gt;
The constructor will also do its best to preserve the dimensions and voxel size of the source image.  In cases of conflicting dimensions and voxel sizes, the constructor will use the smallest voxel size among the candidates and the dimensions associated with that value.  Note that these values are only meaningful when writing to (or reading from) TrackVis (&#039;&#039;.trk&#039;&#039;) files (see note at &amp;lt;tt&amp;gt;CurveCollection::writeTrackVisFile&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(CurveCollection const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  Will produce a curve collection identical to the given one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Constructs an empty curve collection with no curves and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clears and deletes everything in the CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addCurve(Curve const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given curve to this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
For this to happen, the properties of the given curve must match those of this CurveCollection. Otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
However, there are two &#039;&#039;&#039;exceptions&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 1&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If the CurveCollection is empty, it will take on the properties (if any) of the given curve. That is, if a curve &#039;&#039;C&#039;&#039; with properties (or vertex properties) is added to a blank CurveCollection, no error is thrown; instead, the CurveCollection &amp;quot;adopts&amp;quot; &#039;&#039;C&#039;&#039;&#039;s properties, and the properties of any subsequent curves will be required to match those of &#039;&#039;C&#039;&#039;.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 2&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If a CurveCollection has properties, but the given curve has none, the given curve will be assigned zeros (0.0) in place of all the properties it does not have.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing property===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4555</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4555"/>
		<updated>2010-09-13T22:04:32Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* CurveCollection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt; &lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt; near the top of your file.&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive; internally, they are converted to their lower-case form before being stored and compared.&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the program will exit with a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Those wishing to call getProperty from within a loop may want to avoid&lt;br /&gt;
the efficiency loss associated with looking up the name at each call.&lt;br /&gt;
Instead, they can perform the lookup once using&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#Vertex_properties_2|curveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt;, and pass the resulting int ID&lt;br /&gt;
to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside their loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The destructor will delete all vertices that constituted this curve and all properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex*&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with pointers to all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection. Throws &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string was not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors and mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_and_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these work just like they do.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The actual length of the curve (sum of vertex-to-vertex distances).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current curve and the given curve must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These work like typical STL iterators: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; points to the first CurveVertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; points one past the last.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: the iterator points to a &#039;&#039;pointer&#039;&#039; to a CurveVertex. To reiterate: if you dereference the iterator, you get the pointer to a CurveVertex. (This design choice is motivated by the internal representation of a Curve.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const * currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning&#039;&#039;&#039;: a Curve has a &amp;lt;tt&amp;gt;const_iterator&amp;lt;/tt&amp;gt; but no &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt;. If you use an &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt; in your code, you will have problems!&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The constructors take vectors of curves (or pointers to them) as parameters and construct a new CurveCollection using copies of those curves. &lt;br /&gt;
&lt;br /&gt;
If the curves contain any curve and/or vertex properties, they will be preserved.  However, for this to happen, the curve and vertex property names must match (in the same order) across all curves. If this is not the case, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown, and the constructor exits.&lt;br /&gt;
&lt;br /&gt;
The constructor will also do its best to preserve the dimensions and voxel size of the source image.  In cases of conflicting dimensions and voxel sizes, the constructor will use the smallest voxel size among the candidates and the dimensions that went with that value.  Note that these values are only meaningful when writing to (or reading from) TrackVis (&#039;&#039;.trk&#039;&#039;) files (see note at &amp;lt;tt&amp;gt;CurveCollection::writeTrackVisFile&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(CurveCollection const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  Will produce a curve collection identical to the given one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Constructs a blank curve collection, much like you&#039;d expect it to.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~CurveCollection()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clears and deletes everything in the CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addCurve(Curve const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given curve to this CurveCollection.&lt;br /&gt;
&lt;br /&gt;
For this to happen, the properties of the given curve must match those of this CurveCollection. Otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
However, there are two &#039;&#039;&#039;exceptions&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&#039;&#039;Special case 1&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If the CurveCollection is empty, it will take on the properties (if any) of the given curve. That is, if a curve &#039;&#039;C&#039;&#039; with properties (or vertex properties) is added to a blank CurveCollection, no error is thrown; instead, the CurveCollection &amp;quot;adopts&amp;quot; &#039;&#039;C&#039;&#039;&#039;s properties, and the properties of any subsequent curves will be required to match those of &#039;&#039;C&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Special case 2&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
If a CurveCollection has properties, but the given curve has none, the given curve will be assigned zeros (0.0) in place of all the properties it does not have.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing property===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4554</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4554"/>
		<updated>2010-09-13T21:47:22Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* CurveCollection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines near the top of your file: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given name and returns a reference to the corresponding property value in the current CurveVertex. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive. However, they are processed and stored in their lower-case form.&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the function will throw a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Code that calls &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; from within a loop may want to avoid the efficiency loss of looking up the name on every call. Instead, it can perform the lookup once using &amp;lt;tt&amp;gt;[[#Vertex_properties_2|CurveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt; and pass the resulting int ID to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside the loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The destructor will delete all vertices that constituted this curve and all properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex*&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with pointers to all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection. Throws &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string was not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors and mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_and_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these behave identically.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The actual length of the curve (sum of vertex-to-vertex distances).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current curve and new curves must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Works like your typical iterator: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; points to the first CurveVertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; points one past the last.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: the iterator points to a &#039;&#039;pointer&#039;&#039; to a CurveVertex. To reiterate: if you dereference the iterator, you get the pointer to a CurveVertex. (This design choice is motivated by the internal representation of a Curve.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const * currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning&#039;&#039;&#039;: a Curve has a &amp;lt;tt&amp;gt;const_iterator&amp;lt;/tt&amp;gt; but no &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt;. If you use an &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt; in your code, you will have problems!&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveCollection(std::vector&amp;lt;Curve&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The constructors take vectors of curves (or pointers to them) as parameters and construct a new CurveCollection using copies of those curves. &lt;br /&gt;
&lt;br /&gt;
If the curves contain any curve and/or vertex properties, they will be preserved.  However, for this to happen, the curve and vertex property names must match (in the same order) across all curves. If this is not the case, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown, and the constructor exits.&lt;br /&gt;
&lt;br /&gt;
The constructor will also do its best to preserve the dimensions and voxel size of the source image.  In cases of conflicting dimensions and voxel sizes, the constructor will use the smallest voxel size among the candidates and the dimensions associated with that value.  Note that these values are only meaningful when writing to (or reading from) TrackVis (&#039;&#039;.trk&#039;&#039;) files (see note at &amp;lt;tt&amp;gt;CurveCollection::writeTrackVisFile&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing property===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4553</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4553"/>
		<updated>2010-09-13T21:13:33Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Curve */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines near the top of your file: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given name and returns a reference to the corresponding property value in the current CurveVertex. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive. However, they are processed and stored in their lower-case form.&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the function will throw a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Code that calls &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; from within a loop may want to avoid the efficiency loss of looking up the name on each call. Instead, it can perform the lookup once using &amp;lt;tt&amp;gt;[[#Vertex_properties_2|CurveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt; and pass the resulting int ID to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside the loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Destructor===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The destructor will delete all vertices that constituted this curve and all properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex*&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with pointers to all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string was not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors and mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_and_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these work just like they do.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The actual length of the curve (sum of vertex-to-vertex distances).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current and the given curve must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Work like typical STL iterators: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; points to the first CurveVertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; points one past the last.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: the iterator points to a &#039;&#039;pointer&#039;&#039; to a CurveVertex. To reiterate: if you dereference the iterator, you get the pointer to a CurveVertex. (This design choice is motivated by the internal representation of a Curve.) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const * currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning&#039;&#039;&#039;: a Curve has a &amp;lt;tt&amp;gt;const_iterator&amp;lt;/tt&amp;gt; but no &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt;. If you use an &amp;lt;tt&amp;gt;iterator&amp;lt;/tt&amp;gt; in your code, you will have problems!&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing property===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4552</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4552"/>
		<updated>2010-09-13T20:50:36Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: /* Property accessors and mutators */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt; &lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt; near the top of your file.&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
All curve and vertex property names in &amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; are case-insensitive; internally, they are normalized to and stored in their lower-case form.&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the function will throw a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Those wishing to call &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; from within a loop may want to avoid&lt;br /&gt;
the efficiency loss associated with looking up the name at each call.&lt;br /&gt;
Instead, they can perform the lookup once using&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#Vertex_properties_2|CurveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt;, and pass the resulting int ID&lt;br /&gt;
to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside their loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp; v)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an invalid_argument exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int index) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int index)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an invalid_argument exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a runtime_error if the Curve is not in any CurveCollection. Throws an invalid_argument exception if the given string was not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors &amp;amp; mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp; name)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_.26_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these work just like they do.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp; other) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The actual length of the curve (sum of vertex-to-vertex distances).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current and the given curve must match; otherwise, a domain_error is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Work like typical STL iterators: &amp;lt;tt&amp;gt;begin()&amp;lt;/tt&amp;gt; points to the first CurveVertex in this Curve, and &amp;lt;tt&amp;gt;end()&amp;lt;/tt&amp;gt; points one past the last.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const &amp;amp; currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4551</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4551"/>
		<updated>2010-09-13T20:43:04Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read from and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt; &lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt; near the top of your file.&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors and mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the function will throw a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Those wishing to call &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; from within a loop may want to avoid&lt;br /&gt;
the efficiency loss associated with looking up the name at each call.&lt;br /&gt;
Instead, they can perform the lookup once using&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#Vertex_properties_2|CurveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt;, and pass the resulting int ID&lt;br /&gt;
to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside their loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
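&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example (a sketch: &amp;lt;tt&amp;gt;myCollection&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;myCurve&amp;lt;/tt&amp;gt;, and the &amp;quot;FA&amp;quot; property name are hypothetical &amp;amp;mdash; substitute your own): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// assumes myCollection holds myCurve, whose vertices have an &amp;quot;FA&amp;quot; property&lt;br /&gt;
CurveVertex &amp;amp; v = myCurve[0];&lt;br /&gt;
double fa = v.property(&amp;quot;FA&amp;quot;);  // look up by name&lt;br /&gt;
v.property(&amp;quot;FA&amp;quot;) = 0.5;        // modify through the returned reference&lt;br /&gt;
&lt;br /&gt;
// faster inside a loop: resolve the Property ID once&lt;br /&gt;
int faID = myCollection.getVertexPropertyID(&amp;quot;FA&amp;quot;);&lt;br /&gt;
for(int i = 0; i &amp;lt; myCurve.size(); ++i)&lt;br /&gt;
    myCurve[i].property(faID) += 1.0;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;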
&lt;br /&gt;
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp;, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
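&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example (a sketch; the coordinates are arbitrary): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CurveVertex a(0.0, 0.0, 0.0);&lt;br /&gt;
CurveVertex b(3.0, 4.0, 0.0);&lt;br /&gt;
a.distance(b);  // 5.0&lt;br /&gt;
a.equals(b);    // false&lt;br /&gt;
a.equals(CurveVertex(0.0, 0.0, 1e-9));  // true: within the default EPSILON&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;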
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
Curves are built on the assumption of piecewise linearity. That is, the model assumes that adjacent vertices are linearly connected.  (This is used, for example, by the length() function.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex*&amp;gt; const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with a copy of the given vertices and no curve properties.&lt;br /&gt;
If the given vertices had any properties, they will not appear in the new Curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will have no properties and will not be in any CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empty constructor: the new Curve will have no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp; v)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an invalid_argument exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
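&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example (a sketch of both approaches; the coordinates are arbitrary): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// one vertex at a time (properties are checked on every call)&lt;br /&gt;
Curve c;&lt;br /&gt;
c.addVertex(CurveVertex(0.0, 0.0, 0.0));&lt;br /&gt;
c.addVertex(CurveVertex(1.0, 0.0, 0.0));&lt;br /&gt;
&lt;br /&gt;
// all at once, via the vector constructor&lt;br /&gt;
std::vector&amp;lt;CurveVertex&amp;gt; vertices;&lt;br /&gt;
vertices.push_back(CurveVertex(0.0, 0.0, 0.0));&lt;br /&gt;
vertices.push_back(CurveVertex(1.0, 0.0, 0.0));&lt;br /&gt;
Curve c2(vertices);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;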
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int index) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int index)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection, and an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string is not among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors &amp;amp; mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp; name)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_.26_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these work just like they do.&lt;br /&gt;
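&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example (a sketch; &amp;lt;tt&amp;gt;someCurve&amp;lt;/tt&amp;gt; and the &amp;quot;avgFA&amp;quot; curve-property name are hypothetical): &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// read a per-curve scalar by name, then update it through the reference&lt;br /&gt;
double avg = someCurve.property(&amp;quot;avgFA&amp;quot;);&lt;br /&gt;
someCurve.property(&amp;quot;avgFA&amp;quot;) = avg * 2.0;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;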
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp; other) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the length of the curve: the sum of the distances between consecutive vertices.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current and the given curve must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Standard STL-style &amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt; iterators over the curve&#039;s vertices.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const &amp;amp; currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4550</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4550"/>
		<updated>2010-09-13T20:32:21Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read from and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt; &lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt; near the top of your file.&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors &amp;amp; mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the program will exit with a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given property name was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Calling &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; with a string name inside a loop incurs the cost of a name lookup on every iteration. To avoid this, perform the lookup once using &amp;lt;tt&amp;gt;[[#Vertex_properties_2|CurveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt;, then pass the resulting int ID to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside the loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
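To make the pattern concrete, here is a self-contained sketch of the lookup-once approach. The containers and the linear-scan lookup are stand-ins for illustration, not the library&#039;s actual implementation:&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical stand-ins for illustration: property names live in one shared
// table, while each vertex stores only its double values in matching order.
int getVertexPropertyID(std::vector<std::string> const &names,
                        std::string const &name) {
    for (std::size_t i = 0; i < names.size(); ++i)
        if (names[i] == name) return static_cast<int>(i);
    return -1; // not found
}

// Look the name up once, then index cheaply on every iteration.
double sumProperty(std::vector<std::vector<double>> const &vertexProps,
                   std::vector<std::string> const &names,
                   std::string const &name) {
    int id = getVertexPropertyID(names, name);
    double total = 0.0;
    for (std::vector<double> const &props : vertexProps)
        total += props[static_cast<std::size_t>(id)];
    return total;
}
```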
&lt;br /&gt;
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty(std::string const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp; v, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp; v) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
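The arithmetic behind &amp;lt;tt&amp;gt;equals&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;distance&amp;lt;/tt&amp;gt; can be reproduced in a few self-contained lines. This is an illustrative re-implementation, not the library&#039;s code; in particular, whether the tolerance comparison is strict is an assumption here:&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for a vertex position (illustration only).
struct Vtx { double x, y, z; };

// distance(): the Euclidean distance between two points in 3D space.
double distance3(Vtx const &a, Vtx const &b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// equals(): each coordinate pair must lie within EPSILON of each other.
// (Strict < versus <= is an assumption made for this sketch.)
bool equals3(Vtx const &a, Vtx const &b, double eps = 1e-6) {
    return std::fabs(a.x - b.x) < eps &&
           std::fabs(a.y - b.y) < eps &&
           std::fabs(a.z - b.z) < eps;
}
```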
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp; p_vertices);&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with given vertices and no curve properties.&lt;br /&gt;
Warning: if the given vertices have any properties, they will be cleared.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will not be in any CurveCollection.  Thus, while any curve and vertex properties will be copied over, they may only be accessed using the &amp;lt;code&amp;gt;property(int)&amp;lt;/code&amp;gt; accessor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;This behavior (keeping around properties) introduces a bug: when you copy-construct a curve, then add it to a curve collection, then define property &amp;quot;new_property&amp;quot; for it, calling getProperty(&amp;quot;new_property&amp;quot;) will return one of the old properties &amp;amp;mdash; not the newly added one. Possible solution: clear properties in CurveCollection constructors. (That&#039;s what the Curve constructor currently does, avoiding the same problem with the CurveVertex copy constructor.)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Default constructor: creates an empty Curve with no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp; v)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an invalid_argument exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
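The zero-fill behavior can be sketched with a hypothetical helper; the function and its signature are invented for illustration and are not part of the library:&lt;br /&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical helper (not the library's internals) sketching the zero-fill
// rule: a vertex arriving with no properties is padded with 0.0 for each
// property the curve already defines. A non-empty property mismatch makes
// the library throw invalid_argument; that path is elided here.
std::vector<double> padProperties(std::vector<double> incoming,
                                  std::size_t curvePropertyCount) {
    if (incoming.empty())
        incoming.assign(curvePropertyCount, 0.0);
    return incoming;
}
```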
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int index) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int index)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection, and an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string is not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
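The same computation, written out as a standalone sketch; the container layout (a shared name table plus per-vertex value vectors) is an assumption made for illustration:&lt;br /&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Equivalent standalone computation: collect one named property from every
// vertex. Named collectVertexProperty to avoid implying it is the library's
// own function.
std::vector<double> collectVertexProperty(
        std::vector<std::vector<double>> const &vertexProps,
        std::vector<std::string> const &names,
        std::string const &name) {
    std::size_t id = 0;
    while (id < names.size() && names[id] != name) ++id;  // find the ID
    std::vector<double> out;
    for (std::vector<double> const &props : vertexProps)
        out.push_back(props[id]);
    return out;
}
```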
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors &amp;amp; mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp; name)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_.26_mutators|CurveVertex property accessor functions]] &amp;amp;mdash; these behave identically.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp; other) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The actual length of the curve (sum of vertex-to-vertex distances).&lt;br /&gt;
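The computation is equivalent to this standalone sketch (the point type is invented for illustration):&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in point type for illustration.
struct Pt { double x, y, z; };

// length() as described above: the sum of consecutive
// vertex-to-vertex Euclidean distances along the curve.
double curveLength(std::vector<Pt> const &vs) {
    double total = 0.0;
    for (std::size_t i = 1; i < vs.size(); ++i) {
        double dx = vs[i].x - vs[i - 1].x;
        double dy = vs[i].y - vs[i - 1].y;
        double dz = vs[i].z - vs[i - 1].z;
        total += std::sqrt(dx * dx + dy * dy + dz * dz);
    }
    return total;
}
```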
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current curve and the new curve must match; otherwise, a &amp;lt;tt&amp;gt;domain_error&amp;lt;/tt&amp;gt; is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Standard constant iterators over the Curve&#039;s vertices; they behave like typical STL iterators.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const &amp;amp; currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties. In fact, the only way to set properties is through functions that are located in CurveCollection.  &lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be some string associated with them: a name (or a description). But storing a string with every data point in a collection is cost-prohibitive (in memory consumed). Since data in most collections shares the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in a CurveCollection.  Enforcing this invariant is why all properties must be set at the same time and why a Curve and CurveVertex cannot have properties when they are not in a CurveCollection &amp;amp;mdash; without it, there simply isn&#039;t a way to look up the properties to access them.&lt;br /&gt;
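A minimal sketch of this storage split, with types invented purely for illustration:&lt;br /&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// One shared name table at the collection level; nothing but raw doubles at
// each vertex. (These structs are illustrative, not the library's types.)
struct VertexData { std::vector<double> props; };  // no strings stored here
struct CollectionData {
    std::vector<std::string> vertexPropertyNames;  // names stored exactly once
    std::vector<std::vector<VertexData>> curves;   // curves of vertices
};

// Per-vertex property storage costs sizeof(double) per property, no matter
// how long the descriptive names are.
std::size_t perVertexPropertyBytes(CollectionData const &c) {
    return c.vertexPropertyNames.size() * sizeof(double);
}
```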
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing property===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4549</id>
		<title>Libcurvecollection</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=Libcurvecollection&amp;diff=4549"/>
		<updated>2010-09-13T20:27:29Z</updated>

		<summary type="html">&lt;p&gt;Nathan Malkin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;{{infobox|This page is incomplete and currently under development.}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Curve Collection Library&#039;&#039;&#039; (&#039;&#039;&#039;&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt;&#039;&#039;&#039;) provides a representation of streamlines (&amp;quot;curves&amp;quot;) in memory and an interface to access this information in a convenient (but memory-safe!) manner.  It can also read and write to disk, freely converting between several file formats.&lt;br /&gt;
&lt;br /&gt;
Currently, the supported formats are:&lt;br /&gt;
* [http://www.trackvis.org/docs/?subsect=fileformat DTK/TrackVis]&lt;br /&gt;
* Tubegen&#039;s custom format (&amp;lt;tt&amp;gt;.data&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.nocr&amp;lt;/tt&amp;gt; files)&lt;br /&gt;
* [[#CCF|CCF]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&amp;lt;!-- From: Job_Jar/Streamline_library --&amp;gt;&lt;br /&gt;
[[Tubegen]] represents streamlines/tubes in several different formats in memory and on disk, which are distinct from the representations used by well-established [[3rd Party Diffusion MRI Software|third parties]], for example [http://www.fmrib.ox.ac.uk/fsl/ FSL], [http://www.cs.ucl.ac.uk/research/medic/camino/ Camino], and [http://www.trackvis.org/ DTK].  The core of any representation, though, is quite simple: a collection of streamlines, in which each streamline is just an ordered list of vertex points in 3-space.  The tricky bit is that we want to be able to associate scalars with these datasets at various levels:&lt;br /&gt;
&amp;lt;!--* the whole collection (for example, the average length of the streamlines)--&amp;gt;&lt;br /&gt;
* each individual streamline (&#039;&#039;ex: the length of the streamline&#039;&#039;)&lt;br /&gt;
&amp;lt;!--* each segment of each streamline (&#039;&#039;ex: distance to the nearest segment; color values related to the orientation of the segment&#039;&#039;)--&amp;gt;&lt;br /&gt;
* each vertex point of each streamline (&#039;&#039;ex: the interpolated FA value at that point&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Some of the data formats also include a description (i.e., a string) for each scalar value.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;libcurvecollection&amp;lt;/tt&amp;gt; provides a unified interface for handling these various formats, while simplifying access to the information in memory by presenting it in an object-oriented fashion.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
The Curve Collection library can be found under &amp;lt;tt&amp;gt;[[$G]]/common/libcurvecollection/&amp;lt;/tt&amp;gt;. To compile and install it, run &amp;lt;code&amp;gt;make all&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;make install&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To use the library after it has been installed, include the following lines: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curveCollection.h&amp;gt;&lt;br /&gt;
#include &amp;lt;brain/curve.h&amp;gt; &lt;br /&gt;
#include &amp;lt;brain/curveVertex.h&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt; near the top of your file.&lt;br /&gt;
&lt;br /&gt;
==Structure==&lt;br /&gt;
The library consists of three mutually dependent components:&lt;br /&gt;
#[[#CurveVertex|CurveVertex]]&lt;br /&gt;
#[[#Curve|Curve]]&lt;br /&gt;
#[[#CurveCollection|CurveCollection]]&lt;br /&gt;
If you look at the library source code, each component has a header (&amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt;) and a &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; file associated with it.&lt;br /&gt;
&lt;br /&gt;
==CurveVertex==&lt;br /&gt;
A &#039;&#039;&#039;CurveVertex&#039;&#039;&#039; represents a point in 3D space. It holds the point&#039;s coordinates and any metadata (in the form of doubles) that the point may have.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(double, double, double)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main constructor takes three doubles: the x, y, and z coordinates of the point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Copy constructors&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const &amp;amp;)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex(CurveVertex const *)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A vertex instantiated using a copy constructor will have the same coordinates as the original, but won&#039;t have any properties and will not belong to any Curve (or CurveCollection). Hence, a &#039;&#039;&#039;warning&#039;&#039;&#039;: calling functions like &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;getPropertyNames&amp;lt;/tt&amp;gt; on a copy-constructed vertex will result in a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&amp;lt;code&amp;gt;double x() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double y() const&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double z() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Access the x, y, or z coordinate of the vertex, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Property accessors &amp;amp; mutators===&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp;) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the vertex property name.&lt;br /&gt;
This function looks up the given property name and returns a reference to the vertex property (in the current CurveVertex) that it identifies. Because the return value is a reference, you can modify the stored property value (assuming your reference to the object is non-&amp;lt;tt&amp;gt;const&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Note that the Curve Collection Library stores property names only at the highest level (i.e., in a CurveCollection). Therefore, the CurveVertex in question must be in a Curve, and that Curve must be in a CurveCollection before you can call this function. If you call it before this has happened, the program will exit with a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; (&#039;&#039;CurveVertex::checkForContainer: ...&#039;&#039;).&lt;br /&gt;
&amp;lt;ref&amp;gt;libcurvecollection uses exceptions built into the C++ Standard Library ([http://www.cplusplus.com/reference/std/stdexcept/ stdexcept]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the lookup fails (i.e., the given string &#039;&#039;name&#039;&#039; was not found in the CurveCollection), the function will throw an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int)&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter of this function is the integer &#039;&#039;Property ID&#039;&#039;.&lt;br /&gt;
Returns a reference to the vertex property identified by the Property ID. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given ID is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;What is a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
A Property ID is the index of the position in the vector where a property is stored. You can look it up using the &amp;lt;tt&amp;gt;CurveCollection::getVertexPropertyID&amp;lt;/tt&amp;gt; function.&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;Why use a Property ID?&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Calling &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; with a string name inside a loop incurs the cost of a name lookup on every iteration. To avoid this, perform the lookup once using &amp;lt;tt&amp;gt;[[#Vertex_properties_2|CurveCollection::getVertexPropertyID]]&amp;lt;/tt&amp;gt;, then pass the resulting int ID to &amp;lt;tt&amp;gt;property&amp;lt;/tt&amp;gt; inside the loop.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other property functions===&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of properties this vertex has.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;std::string&amp;gt; const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector of all the property names of this CurveVertex (in the order in which they were set). Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection. Returns an empty vector if this CurveVertex is in a CurveCollection, but no properties have been set.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TODO: rethink this?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true if this vertex has the property identified by the given name. (Specifically, checks that the CurveCollection knows this property name.) Hence, throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the CurveVertex is not in a Curve, or if that Curve is not in a CurveCollection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool equals(CurveVertex const &amp;amp; v, double EPSILON = 1e-6) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compares the positions of two vertices in 3D space, returning &#039;&#039;true&#039;&#039; if each pair of coordinates (x, y, z) is within EPSILON of each other. Does not check for equality of properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double distance(CurveVertex const &amp;amp; v) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Computes the distance between two vertices in 3D space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Curve==&lt;br /&gt;
A &#039;&#039;&#039;Curve&#039;&#039;&#039; represents a track or streamline in 3D space, and consists of an ordered sequence of &amp;lt;span title=&amp;quot;CurveVertex&amp;quot;&amp;gt;vertices&amp;lt;/span&amp;gt; and any metadata (in the form of doubles) that is common to them.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve([http://www.sgi.com/tech/stl/Vector.html std::vector]&amp;lt;CurveVertex&amp;gt; const &amp;amp; p_vertices);&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initializes Curve with given vertices and no curve properties.&lt;br /&gt;
Warning: if the given vertices have any properties, they will be cleared.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy constructor.  The new Curve will not be in any CurveCollection.  Thus, while any curve and vertex properties will be copied over, they may only be accessed using the &amp;lt;code&amp;gt;property(int)&amp;lt;/code&amp;gt; accessor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;This behavior (keeping around properties) introduces a bug: when you copy-construct a curve, then add it to a curve collection, then define property &amp;quot;new_property&amp;quot; for it, calling getProperty(&amp;quot;new_property&amp;quot;) will return one of the old properties &amp;amp;mdash; not the newly added one. Possible solution: clear properties in CurveCollection constructors. (That&#039;s what the Curve constructor currently does, avoiding the same problem with the CurveVertex copy constructor.)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Default constructor: creates an empty Curve with no vertices and no properties.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a curve===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;void addVertex(CurveVertex const &amp;amp; v)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Appends the given CurveVertex to the end of the current Curve. Since you are adding to an already-existing curve, the properties of the new vertex must match&amp;lt;ref&amp;gt;Whenever the library wants to check that the properties of two curves match, it calls the (public) &amp;lt;tt&amp;gt;Curve::propertiesMatch&amp;lt;/tt&amp;gt; function, which checks that the curve and vertex properties are identical in their quantity, names, and ordering.&amp;lt;/ref&amp;gt; the properties of the vertices that are already in the curve. In the event of a mismatch, an invalid_argument exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There is one exception:&#039;&#039;&#039; if you are adding a vertex with &#039;&#039;no properties&#039;&#039; to a curve that &#039;&#039;does&#039;&#039; have properties, the vertex will be added to the curve (and no exception will be thrown), but the vertex will get zeros (0.0) for each of the properties.&lt;br /&gt;
&lt;br /&gt;
Note: this function checks for matching properties on every call, so &amp;amp;mdash; depending on your situation &amp;amp;mdash; a more efficient way to populate a curve may be to store a [http://www.sgi.com/tech/stl/Vector.html vector] of vertices and pass them to the [[#Constructors_2|constructor]]. (But remember to store the vertex properties separately, or they will be erased!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;CurveVertex&amp;gt; const &amp;amp; getVertices() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with all of the vertices this Curve contains.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex const &amp;amp; operator[](int index) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;CurveVertex &amp;amp; operator[](int index)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the CurveVertex at the given index in the curve. Throws an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the index is out of bounds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;code&amp;gt;CurveVertex&amp;amp; firstVertex = someCurve[0];&amp;lt;/code&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing vertex properties===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::vector&amp;lt;double&amp;gt; getVertexProperty(std::string const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns a vector with the identified property value from each of this Curve&#039;s vertices. Throws a &amp;lt;tt&amp;gt;runtime_error&amp;lt;/tt&amp;gt; if the Curve is not in any CurveCollection, and an &amp;lt;tt&amp;gt;invalid_argument&amp;lt;/tt&amp;gt; exception if the given string is not found among the vertex property names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
If &amp;lt;tt&amp;gt;Curve someCurve&amp;lt;/tt&amp;gt; consists of three vertices &amp;lt;tt&amp;gt;aVertex, anotherVertex&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;aThirdVertex&amp;lt;/tt&amp;gt;, each of which has a property &amp;lt;tt&amp;gt;propertyX&amp;lt;/tt&amp;gt; with values &amp;lt;tt&amp;gt;10, 20, &amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;30&amp;lt;/tt&amp;gt;, respectively, &amp;lt;tt&amp;gt;someCurve.getVertexProperty(&amp;quot;propertyX&amp;quot;)&amp;lt;/tt&amp;gt; will return the vector &amp;lt;tt&amp;gt;[10, 20, 30]&amp;lt;/tt&amp;gt;.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Curve property accessors &amp;amp; mutators===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(std::string const &amp;amp; name)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(std::string const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double &amp;amp; property(int id)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double const &amp;amp; property(int id) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Property_accessors_.26_mutators|CurveVertex property accessor functions]] &amp;mdash; these functions behave identically.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve property functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int propertyCount() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;std::string const &amp;amp; getPropertyNames() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool hasProperty([http://www.sgi.com/tech/stl/basic_string.html std::string] const &amp;amp; name) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the documentation for the [[#Other_property_functions|CurveVertex property functions]] &amp;amp;mdash; their behavior is identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;bool propertiesMatch(Curve const &amp;amp; other) const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns true when the curve and vertex properties in the two curves are identical in quantity, name, and order.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Other curve functions===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;int size() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the number of vertices in this curve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;double length() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the length of the curve: the sum of the distances between consecutive vertices.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Curve &amp;amp; operator=(Curve const &amp;amp; c)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assignment operator: replaces this curve&#039;s vertices and properties with those of the given curve. The properties of the current and new curves must match; otherwise, a domain_error exception is thrown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Iterator===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator begin() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;const_iterator end() const&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These behave like standard STL const iterators over the Curve&#039;s vertices.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Usage example: &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Curve myCurve; // already populated&lt;br /&gt;
for(Curve::const_iterator vertexIterator = myCurve.begin(); &lt;br /&gt;
    vertexIterator != myCurve.end(); &lt;br /&gt;
    ++vertexIterator) &lt;br /&gt;
{&lt;br /&gt;
    CurveVertex const &amp;amp; currentVertex = *vertexIterator;&lt;br /&gt;
&lt;br /&gt;
    // do something with currentVertex&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CurveCollection==&lt;br /&gt;
&lt;br /&gt;
An instance of &#039;&#039;&#039;CurveCollection&#039;&#039;&#039; holds an arbitrary number of Curve objects and information about the Curve and CurveVertex properties (name/description strings). It also provides functions for the import and export of this data.&lt;br /&gt;
&lt;br /&gt;
===A note on properties===&lt;br /&gt;
Every CurveVertex stores its own (vertex) properties. Likewise, every Curve stores its own curve properties. However, all curves and vertices in a CurveCollection must have the same properties, and the only way to set properties is through functions on CurveCollection.&lt;br /&gt;
&lt;br /&gt;
Why is this the case?  For metadata such as curve and vertex properties to be meaningful, there must be a string associated with them: a name (or a description). But storing a string with every data point in a collection is prohibitively expensive in memory. Since the data in most collections share the same properties, we have chosen to store the property names at the top level &amp;amp;mdash; in the CurveCollection. Enforcing this invariant is why all properties must be set at the same time, and why a Curve or CurveVertex cannot have properties when it is not in a CurveCollection &amp;amp;mdash; without the collection, there is simply no way to look up the property names.&lt;br /&gt;
&lt;br /&gt;
===Constructors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Populating a CurveCollection===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessors===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Setting properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Accessing properties and information about them===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Clearing properties===&lt;br /&gt;
&lt;br /&gt;
====Curve properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Vertex properties====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===I/O functions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CCF==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{Reflist}}&lt;/div&gt;</summary>
		<author><name>Nathan Malkin</name></author>
	</entry>
</feed>