Nascent Papers
Steven Gomez (talk | contribs)
Revision as of 17:30, 24 May 2011
Steve
[updated 5/24/11]
Tentative title: Modeling the visual attention 'spotlight' by low-level image features for vector and tensor field visualizations
Expected Contribution(s):
- A method for learning 2D ROIs (bounding boxes) in the image plane that are interesting to users. We learn the low-level properties (e.g., Gist descriptors, textons) of interesting ROIs by having users manually select "interesting" ones during training, given the domain and analysis task, and by capturing the ROIs that users attend to with eye tracking. The assumption is that these interesting areas, when encoded by descriptors, cluster well across users for a particular visualization tool.
- An algorithm for identifying interesting ROIs (scale/rotation invariant) in a new image, using this training data. We integrate this into the visualization tool to highlight areas of interest.
- A quantitative evaluation on an identification task (e.g., finding fiber bundle crossings) comparing the original and highlighted versions of a tractography application.
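The learn-then-match idea in the contributions above could be sketched roughly as follows. This is a minimal illustration, not the planned method: the descriptor here is a crude gradient-orientation histogram standing in for Gist/texton features, the "cluster" is a single mean-plus-radius model, and all function names are hypothetical.

```python
import numpy as np

def roi_descriptor(patch, bins=6):
    # Crude stand-in for Gist/texton features: a magnitude-weighted
    # histogram of gradient orientations over an image patch.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # orientations in (-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

def fit_interest_model(training_patches):
    # Model "interesting" ROIs as the mean descriptor of the training
    # examples plus the radius of the training cloud (the clustering
    # assumption stated above, reduced to one cluster).
    X = np.array([roi_descriptor(p) for p in training_patches])
    center = X.mean(axis=0)
    radius = np.linalg.norm(X - center, axis=1).max()
    return center, radius

def is_interesting(patch, center, radius, slack=1.5):
    # Flag a new ROI when its descriptor falls near the training cluster.
    return bool(np.linalg.norm(roi_descriptor(patch) - center) <= slack * radius)
```

In the real pipeline the training ROIs would come from manual selection and eye tracking, and a multi-cluster model (e.g., k-means over descriptors) would replace the single mean.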
Abstract: TBD
Tentative title: Crowdsourcing visualization interaction: a taxonomy of interactions that Turkers do well and poorly
Expected Contribution(s):
- Demonstration of the use of crowdsourcing for interactive visual analytics.
- A characterization of analytical interaction tasks one might reliably crowdsource, as well as those that should not be crowdsourced.
Expected Results:
- We designed experiments to test user performance over a general taxonomy of interaction types for information visualization (e.g., the taxonomy in the Yi and Stasko InfoVis paper).
- Found that X of these interactions can be reliably completed by novices, and Y cannot.
In a sense this is a follow-up to Heer and Bostock's 2010 CHI paper ("Crowdsourcing Graphical Perception...").
Abstract: TBD
Hypotheses on my mind
- Analysts spend more time with attractive analytics than with ugly ones.
- Analysts feel more confident in the analyses they produce when using attractive/conventional tools than when using ugly/unconventional ones.
- The Technology acceptance model (TAM) (http://en.wikipedia.org/wiki/Technology_acceptance_model) extends to visualization / visual analytics.
  - Is trust a factor? Is it part of perceived usefulness or perceived ease of use?
- More saccades occur with more uncertain data.
- (Does the analyst spend more mental effort trying to build a narrative of what’s going on in the data when s/he is aware of uncertainty in the vis?)
- Viewing other analyses or past sessions from a visual tool will improve confidence in my own analysis or in the tool. (Not just the effect of training.)
  - How do we communicate interesting findings in visualizations?
  - If users took screenshots of interesting/persuasive views, can we learn what makes a good expository view?
  - Check out the CHI ’11 paper “The Impact of Social Information on Visual Judgments”.
- We can build a useful dataset from images scraped from wiki pages and learn something about how people do expository/explanatory visualization and narrative construction.
- We can evaluate what makes a "high rated" visualization on ManyEyes.
  - It’s easy to visually tell the difference between the “high rated” sorted list of visualizations there and the “most current” list. I’m not sure the topic of the visualization is even that critical -- I could probably classify to a first approximation just by looking at the title syntax and the distribution of the visualization data.
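The title-syntax classification hypothesis in the last bullet could be tested with something as simple as a Naive Bayes classifier over title tokens. A minimal sketch, with invented example titles (a real dataset would be scraped from ManyEyes) and hypothetical class labels:

```python
from collections import Counter
import math

def train_nb(docs_by_class):
    # Multinomial Naive Bayes over title tokens.
    models = {}
    for label, docs in docs_by_class.items():
        counts = Counter(tok for d in docs for tok in d.lower().split())
        models[label] = (counts, sum(counts.values()))
    return models

def classify(title, models, vocab_size=1000):
    # Pick the class with the highest add-one-smoothed log-likelihood.
    best, best_lp = None, -math.inf
    for label, (counts, total) in models.items():
        lp = sum(math.log((counts[t] + 1) / (total + vocab_size))
                 for t in title.lower().split())
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented titles for illustration only.
models = train_nb({
    "high_rated": ["us federal budget by agency",
                   "world population growth 1960-2010",
                   "state population density map"],
    "most_current": ["untitled visualization", "test",
                     "my test data", "untitled 2"],
})
print(classify("world population of europe", models))  # -> high_rated
```

Adding features for the distribution of the underlying visualization data (cardinality, skew, column types) would be the natural next step.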
Eni
A Comparative Study of Cognitive Functioning and Fiber Tract Integrity
Contributions
1. Establish a relationship between diffusion metrics and measurements of working memory and motor control.
i. Statistically comparing working memory test results with several quantitative tractography metrics of structural integrity in the SLF and the fornix
ii. Comparing motor control test results with the same metrics in the SLF and the fornix
2. Draw on the relationship between axial or radial diffusivity and the nature of axonal damage to infer the prevalent type of damage in tracts affected by CADASIL
Results
1.
i. Working memory test results should correlate with most metrics in the SLF. They should not, however, correlate as strongly with the metrics in the fornix, since CADASIL affects the SLF more drastically than it affects the fornix.
ii. In the SLF, motor control test results should not correlate with the metrics as strongly as the working memory test results do.
2. Among the metrics, a few should correlate more significantly than others with the cognitive test results. These correlations should provide clues about the type of damage that CADASIL causes, and will potentially single out the most effective metrics for assessing white matter integrity in CADASIL patients.
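The planned analysis is essentially a set of Pearson correlations between test scores and per-tract metrics. A minimal sketch on synthetic data (not real measurements; the coupling strengths and metric names are invented to illustrate the expected SLF-vs-fornix contrast):

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two samples.
    return float(np.corrcoef(x, y)[0, 1])

# Synthetic illustration: n-back scores coupled to a hypothetical SLF
# integrity metric but not to the corresponding fornix metric.
rng = np.random.default_rng(42)
n = 40                                          # subjects
nback = rng.normal(size=n)                      # working memory scores
fa_slf = 0.8 * nback + 0.6 * rng.normal(size=n)  # coupled metric
fa_fornix = rng.normal(size=n)                   # uncoupled metric

print(f"SLF:    r = {pearson_r(nback, fa_slf):+.2f}")
print(f"fornix: r = {pearson_r(nback, fa_fornix):+.2f}")
```

The real study would additionally report significance (e.g., a t-test on r with n − 2 degrees of freedom) and repeat this per metric and per tract.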
Tentative Abstract
In this study we examine the interdependence of cognitive test results and white matter integrity, as assessed by quantitative tractography metrics. Different fiber tracts in the brain are related to different cognitive functions; e.g., the SLF is related to working memory. Therefore, atrophy in a given tract leads to impaired performance in the functions controlled by that tract. Knowing that CADASIL causes severe injury in the SLF but not the fornix, we chose to compare the structural integrity of these tracts, as measured by several quantitative tractography metrics, against working memory and motor control test results. The n-back test was used to test working memory and the X-test was used to test motor control. The results of the n-back correlated well with the metrics in the SLF; they did not correlate with the metrics in the fornix. Among the metrics, NTWLad showed the highest correlation with the n-back results, suggesting that there is more axonal loss than demyelination in the tract. The results of the X-test did not correlate significantly with any of the metrics in the SLF. These results confirm a strong relationship between performance on cognitive tests and white matter health as measured by quantitative tractography. Furthermore, they shed some light on the nature of the axonal damage caused by CADASIL.
Trevor
Tentative Title(s):
- Extracting Semantic Content from Interaction Histories in 3D, Time-varying Visualizations
- Interaction Histories for Collaboration, Search, and Prediction in 3D, Time-Varying Visualizations
Contributions:
- Introduces a generalizable framework for automatically generating sharable, editable, searchable interaction histories in time-varying 3D applications.
- Demonstrates utility of Relational Markov Models (RMMs) in extracting semantic information from interaction histories, useful for prediction and automation in scientific exploration.
- Contributes the technical implementation details (software itself? open source project?) for applying said methods in pre-existing applications.
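The prediction contribution could be prototyped with something far simpler than a full RMM. The sketch below is a first-order Markov model over interaction event types; it is a deliberate simplification (no relational structure over objects), with invented event names, meant only to show the shape of history-based prediction:

```python
from collections import Counter, defaultdict

class InteractionMarkovModel:
    """First-order Markov model over interaction event types.

    A much-simplified stand-in for the Relational Markov Models (RMMs)
    mentioned above: it predicts the next interaction from the current one.
    """
    def __init__(self):
        self.counts = defaultdict(Counter)  # event -> Counter of followers

    def fit(self, sessions):
        # Count event-to-event transitions across recorded sessions.
        for session in sessions:
            for prev, nxt in zip(session, session[1:]):
                self.counts[prev][nxt] += 1

    def predict_next(self, event):
        # Return the most frequent follower of `event`, or None if unseen.
        followers = self.counts.get(event)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

# Hypothetical logged sessions from a 3D visualization tool.
sessions = [
    ["rotate", "zoom", "select", "annotate"],
    ["rotate", "zoom", "select", "filter"],
    ["zoom", "select", "annotate"],
]
model = InteractionMarkovModel()
model.fit(sessions)
print(model.predict_next("zoom"))  # -> select
```

An RMM would additionally abstract over the objects being interacted with (e.g., "select any bone" rather than "select bone #7"), which is what makes the semantic extraction interesting.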
Results:
- Techniques were applied in 3 existing applications: Animal kinematics from CT & X-ray, bat flight kinematics from light capture, and (__?? brain stuff, wrist stuff??, maybe infovis stuff like proteomics??___)
- User evaluation of history generation matched user-defined histories in X% of cases. (Fully-automated, semi-automated, manual)
- Collect data on collaboration? Anecdotal evidence on how tools were used for collaboration? (Need to get on this quickly, with new data sets that are actively being explored. Talk to Beth, Sharon.)
- User study on task completion times with tools versus without tools.
- Relational models evaluated against survey data, i.e., the user reports what s/he was trying to uncover in a series of interactions, and we check how the system interpreted those interactions.
(Need to think more about how to objectify the previous two bullets.)
Abstract: TBD.
References:
Çağatay
Coloring 3D line fields using Boy’s real projective plane immersion
Abstract:
It’s often useful to visualize a line field, a function that sends each point P of the plane or of space to a line through P; such fields arise in the study of tensor fields, where the principal eigendirection at each point determines a line (but not a vector, since if v is an eigenvector, so is −v). To visualize such a field, we often assign a color to each line; thus we consider the coloring of line fields as a mapping from the real projective plane (RP2) to color space. Ideally, such a coloring scheme should be smooth and one-to-one, so that the color uniquely identifies the line; unfortunately, no such mapping exists. We introduce Boy’s surface, an immersion of the projective plane in 3D, as a model for coloring line fields, and show results from its application in visualizing orientation in diffusion tensor fields. This coloring method is smooth and one-to-one except on a set of measure zero (the double curve of Boy’s surface).
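A sketch of the coloring idea, using the Bryant–Kusner parametrization of Boy’s surface. Since that parametrization satisfies f(w) = f(−1/w̄), it factors through the antipodal map on the sphere, so v and −v land on the same surface point. The final affine map from surface coordinates to RGB is an arbitrary choice here, not the paper’s:

```python
import numpy as np

SQRT5 = np.sqrt(5.0)

def boys_point(w):
    # Bryant-Kusner parametrization of Boy's surface at complex w, |w| <= 1.
    d = w**6 + SQRT5 * w**3 - 1.0
    g1 = -1.5 * (w * (1.0 - w**4) / d).imag
    g2 = -1.5 * (w * (1.0 + w**4) / d).real
    g3 = ((1.0 + w**6) / d).imag - 0.5
    g = g1 * g1 + g2 * g2 + g3 * g3
    return np.array([g1, g2, g3]) / g

def line_color(v):
    # Color a 3D line direction: v and -v receive the same RGB triple.
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    if v[2] > 0:                  # flip to the lower hemisphere: same line
        v = -v
    w = complex(v[0], v[1]) / (1.0 - v[2])  # stereographic projection, |w| <= 1
    p = boys_point(w)
    # Arbitrary affine squeeze of the surface coordinates into RGB [0,1]^3.
    return np.clip(0.5 + 0.2 * p, 0.0, 1.0)
```

The hemisphere flip is free precisely because we color lines, not vectors; the coloring fails to be one-to-one only along the double curve of the immersion, as stated in the abstract.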
Andy Forsberg, Jian, DHL
Vis '09 - 3D vector visualization methods paper / study
Abstract:
TBD.