CS295J/Experiment results from class 6
== David: interactively identifying brain pathways ==
=== Methods ===
Eni Halilaj, a CS PhD student, produced a 21-minute recording of herself running an interactive application named brainapp. This application displays hundreds to thousands of curved cylindrical tubes that are computed from medical imaging data and that represent coherent white matter. Her task was to select a subset of the tubes corresponding to two specific tracts: the corpus callosum and the corticospinal tract. To select a subset, she creates axis-aligned 3D boxes through which the tubes pass, together with logical rules for which tubes to retain. She demonstrates one case where she selects the tubes that pass through one box but miss another, as well as a case where she selects the tubes that pass through two different boxes. Eni has used this program quite a bit, so she is quick at it.
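To make the selection mechanism concrete, here is a minimal sketch of box-based filtering with logical rules, assuming a tube is represented as a polyline of 3D points. The names (passes_through, select) are illustrative, not brainapp's actual API.

```python
# Sketch of box-based tube selection with logical rules.
# Representations and names here are assumptions, not brainapp's code.

def passes_through(tube, box):
    """True if any point of the tube lies inside the axis-aligned box.

    tube: list of (x, y, z) points along the tube's centerline
    box:  ((xmin, ymin, zmin), (xmax, ymax, zmax))
    """
    (xmin, ymin, zmin), (xmax, ymax, zmax) = box
    return any(xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax
               for x, y, z in tube)

def select(tubes, rule):
    """Keep only the tubes for which the logical rule holds."""
    return [t for t in tubes if rule(t)]

# Toy data: one tube passing both boxes, one passing only box_a.
tubes = [
    [(1, 1, 1), (5, 5, 5), (25, 5, 5)],
    [(1, 1, 1), (5, 5, 5), (9, 9, 9)],
]
box_a = ((0, 0, 0), (10, 10, 10))
box_b = ((20, 0, 0), (30, 10, 10))

# The two cases demonstrated in the video:
# tubes that pass through one box but miss another...
keep = select(tubes, lambda t: passes_through(t, box_a)
                               and not passes_through(t, box_b))
# ...and tubes that pass through two different boxes.
keep_both = select(tubes, lambda t: passes_through(t, box_a)
                                    and passes_through(t, box_b))
```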
The video is online at CS, but it is 4+ GB, so I did not put it in the wiki. It was taken on a tripod and shows the computer display, keyboard, mouse, the back of Eni's head, and her right arm and hand. It was recorded onto a miniDV videotape and transferred to computer using Windows MovieMaker. I played it back in PowerDVD, pausing and rewinding as necessary, while logging her actions. The time display was in seconds, and the detail below is at that granularity.
=== Results ===
In the detail below, the second column shows the time and the third column the activity following that time. The first column clusters those activities into higher-level types of actions according to the following key:
k - keyboarding
m - mousing
x - referring to external info
w - waiting
2 - 2d viewing and interaction (e.g., windows)
3 - 3d viewing and interaction
T - talking out loud in ways that are likely delaying the task
k 00:09 sits down
k 00:12 text window pops up; moving text window around
k 00:22 starts typing into text window
k 00:44 still typing
k 00:51 starts changing the subject name/location; is doing this by editing a text file, backspacing over something and then typing something new
kx 01:07 looking over at some paper; looking back at the screen; still editing the subject info
k 01:21 starting brainapp; typing at a command-line prompt; describing the options to the command
k 01:57 still typing the options
k 02:07 still typing the options
w 02:17 an empty graphics window pops up
m2 02:23 brings text window to the front, moves it to the corner
w 02:29 done with that
w 02:33 waiting
w 02:37 waiting
w 02:40 waiting
w 02:50 waiting
k 02:58 types in the text window in response to output
w 03:08 additional bunch of text output came out
m 03:14 a brain shows up in the graphics window
m2 03:16 pops the graphics window in front of the text one
m3 03:18 resizes the brain
m3 03:25 rotating brain and bringing up dialog boxes; everything being done with the mouse; uses slider to adjust threshold, increasing/decreasing tube density
m3 03:42 moves toward
T 03:50 pointing at the corpus callosum; explaining what she'll do
m3 04:05 adjusting box parameters in the menu slider bars
m3 04:22 moved head forward to look very closely at the display
m3x 04:28 still looking very closely; looking off to the left at paper
m3 04:30 looking back at the display; adjusting some parameters
mT 04:44 rotating the brain around to different positions to bring it up between the various menu windows; describing how to pick boxes so that they don't have too many incorrect tubes, and finally rotating around to verify that the box has good stuff in it
m3 05:20 add ROI; many more tubes come up; trying to highlight the box by clicking on it; having difficulty because it's surrounded by some of the other things
m3 05:49 box is now selected; all of the unselected tubes go away; rotating this around
mT 05:58 pointing out some unwanted tubes
mT 06:07 describing putting in a new box to get rid of unwanted tubes; dragging
m3 06:15 dragging box faces around with sliders
m3 06:24 moving, zooming in
m3 06:28 adjusting box parameters
m3 06:37 done with second box; looking to use it as a tool for removing unwanted tubes
m3 06:47 rotating around to check for accuracy
m3 06:52 rotating around to check for accuracy
m3 06:57 identifies more tubes that are incorrect
m3 07:06 working to place another box; adjusting the edges
m3 07:14 carefully adjusting the faces
m3 07:19 rotating around to check
m3 07:22 rotating
m3 07:26 more rotating
m3 07:32 very intense study of the display while rotating things; adjusting the box size left/right with the mouse
m3 07:50 saving the boxes for future use; closing the window
k 07:56 the windows closed
k 08:08 corticospinal tract in the same subject; starting up brainapp again
w 08:10 waiting
w 08:22 waiting
w 08:30 waiting
kw 08:37 something pops up in the text window; types a response
m2 08:49 window shows up with brainapp and she starts rotating the brain; brings up menus
m3 09:03 adjusting brain orientation
m3 09:13 still adjusting the brain
m3 09:22 still adjusting the brain
m3 09:25 rotating around
m3T 09:29 at the corticospinal tract and describing it
m3 09:47 adjusting parameters of the box
m3 09:54 adjusting another parameter with the sliders
m3 09:59 another one
m3 10:04 now rotating the brain around to look at the box that's been selected
m3 10:12 continuing to rotate and translate
m3T 10:25 describing that it has some extra tracts and what should be done
m3 10:35 selecting just the tracts going through the box
m3T 10:45 pointing at the things that we want to get rid of
m3 10:52 creating another box
m3 10:58 moving it around and adjusting it
m3 11:04 continuing to move it
m3 11:10 now rotating the brain; duplicating that box and moving it over to the other side so that you can catch the pieces that are wrong on that side
m3 11:31 adjusting the third box a little bit, location-wise
m3 11:39 zooming in
m3T 11:45 explaining how to combine boxes (tracts through 2)
m2 11:51 select 2 boxes
m2 12:00 bring up combinations dialog box
m2 12:02 select 'and'
m2 12:06 select other two
m2 12:10 select 'and'
T 12:12 explain
m3 12:16 move head in and carefully study, zoom, rotate
m3 12:21 explain extra fiber
m3 12:31 make another box
m3 12:42 adjusting params
m3 12:47 visually verifying
k2 12:51 saving out
w 12:56 saved and closed
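As an aside on working with a transcript in this format, here is a rough Python sketch (not part of the study) that parses entries like "k 00:09 sits down" and totals the time spent per category code, under the assumption that each entry lasts until the next timestamp.

```python
import re
from collections import defaultdict

# Parse transcript lines of the form "<codes> <mm>:<ss> <description>".
ENTRY = re.compile(r'([kmxw23T]+)\s+(\d+):(\d+)\s+(.*)')

def parse(text):
    entries = []
    for line in text.splitlines():
        m = ENTRY.match(line.strip())
        if m:
            codes, mins, secs, desc = m.groups()
            entries.append((codes, int(mins) * 60 + int(secs), desc))
    return entries

def totals(entries):
    # Each entry lasts until the next timestamp; a multi-code label like
    # "m3" splits its duration evenly across its codes. The final entry
    # gets no duration since nothing follows it.
    per_code = defaultdict(float)
    for (codes, t, _), (_, t_next, _) in zip(entries, entries[1:]):
        for c in codes:
            per_code[c] += (t_next - t) / len(codes)
    return dict(per_code)

sample = """\
k 00:09 sits down
k 00:12 text window pops up
m2 02:23 brings text window to the front
"""
print(totals(parse(sample)))   # {'k': 134.0}
```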
=== Discussion and conclusions ===
- the list of time-labeled actions is unsatisfying -- I would like to see things on a timeline that is regularly spaced, but don't know how to do that (one rough way is sketched after this list)
- manually transcribing data like this is a pain -- I would like a way to capture various easy-to-capture events together with one or more video signals so that I can analyze them offline. Combining that with a display would be really helpful.
- seeing the user's head was helpful. There were several cases where she looked off to the side or moved her head in close to the display; these were useful evidence of what she was thinking about
- this application requires a lot of waiting and keyboarding that seems wasteful
- the user was very focused on creating appropriate boxes to accomplish selection of the right tubes. It wasn't clear that a major change in this interface would actually speed up the process very much; much of the interaction time was spent evaluating the results, which would likely be needed for any interaction method
- as an aside, some way of capturing the emotional state of a user could be useful: tired, frustrated, angry, bored, confused, etc. Perhaps camera-based?
- the appropriate level of granularity for this kind of work is unclear. At this level of seconds, there are interesting actions, but how to change them is unclear.
- the grouping of actions into categories seemed natural as I watched the tape. They were intended to cluster some of the actions into activities. They do not capture higher-level goals.
- for this program, the goal structure is something like:
  - open the program
  - select the subject
  - loop:
    - define a box that refines the set of selected tubes by iteratively
      - moving one face with a slider
      - possibly rotating the 3D model to study the result
    - incorporate the new box with any existing ones
    - rotate the 3D model to evaluate whether the set of tubes is complete
  - save the results
  - quit the program
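Here is a rough sketch of the regularly spaced timeline wished for in the first bullet above. It assumes the (codes, seconds, description) tuples produced by the parse() sketch under Results, and it is one possible rendering, not a polished tool.

```python
import matplotlib.pyplot as plt

def plot_timeline(entries):
    """Draw one row per category code; each entry becomes a bar from its
    start time to the next entry's start time."""
    rows = sorted({c for codes, _, _ in entries for c in codes})
    fig, ax = plt.subplots(figsize=(10, 3))
    for (codes, t, _), (_, t_next, _) in zip(entries, entries[1:]):
        for c in codes:
            ax.broken_barh([(t, t_next - t)], (rows.index(c), 0.8))
    ax.set_yticks([r + 0.4 for r in range(len(rows))])
    ax.set_yticklabels(rows)
    ax.set_xlabel('time (s)')
    plt.show()

# Usage, with the parse() sketch above and a hypothetical transcript file:
# plot_timeline(parse(open('transcript.txt').read()))
```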
== Gideon: Modifying Microsoft PowerPoint Presentations ==
=== Methods ===
A Cognitive Science PhD student produced a 26-minute recording of herself and her computer using Microsoft PowerPoint. I created a custom task in which she was given two open PowerPoint files named start.pptx and finish.pptx. Her goal was to alter start.pptx so that it matched finish.pptx. There were 7 slides with custom animations, transitions, text, and artwork. The participant was asked to explain her actions as she used the software, and the experimenter sat beside her to address any concerns that came up.
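As an aside, a task like this could in principle be scored automatically. The sketch below uses the python-pptx library to flag text differences between the two decks; it is illustrative only, and it cannot check animations or transitions, which python-pptx does not expose, so those would still need manual inspection.

```python
from pptx import Presentation

def slide_texts(path):
    """Collect the text of every text-bearing shape, per slide."""
    texts = []
    for slide in Presentation(path).slides:
        texts.append([shape.text_frame.text
                      for shape in slide.shapes if shape.has_text_frame])
    return texts

start, finish = slide_texts('start.pptx'), slide_texts('finish.pptx')
for i, (a, b) in enumerate(zip(start, finish), 1):
    if a != b:
        print(f'slide {i} differs: {a!r} vs {b!r}')
```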
The video was captured using iShowU HD for the Macintosh. The software captures voice, video (of the face), screen video, computer sound output, mouse clicks, and keyboard presses. A .mov file was output at default YouTube quality settings (640x480).
=== Results ===
There is a lot of information in the output. The formatted movie output contains metadata about the user. I did not post the video due to its large size; it is on my computer.
=== Discussion and conclusions ===
- The user was used to a previous version of the software, so had to relearn the location of many features.
- One of the first things done was closing the formatting palette!
- Audio explanations were helpful for inferring the user's goals.
- I think switching between PowerPoint files via keyboard shortcut (command+tilde) is much more productive than her technique (using the mouse, manually dragging and resizing windows).
- It was very interesting to see what she noticed and didn't notice.
- It was also interesting that she perceived differences that weren't there.
- She seemed to learn and develop a strategy to modify (format) text and objects.
- She did not bother with animation or transitions, interestingly.
== Eric: Creating an Animation Using Maya ==
=== Methods ===
A Computer Science master's student was recorded performing animation work for 30 minutes using the Maya application on a MacBook computer. Her task was to create a simple walking animation of a 3D model. The video shows her editing 3-4 specific frames of this animation.
The video was captured using iShowU (not HD) for the Macintosh. The software captures voice, screen video, computer sound output, and mouse clicks. A separate video was taken of the keyboard while the user was making edits. A .mov file for each of these videos was output. Compressed versions of these videos can be seen here:
=== Results ===
Editing...
=== Discussion and conclusions ===
Editing...
== Trevor and Andrew: Analyzing Users in a 3D Flow Visualization Study ==
=== Methods ===
We examine workflows from two participants in a visualization user study comparing four methods for visualizing 3D vector fields: a streamline and a streamtube rendering, each shown under monoscopic and stereoscopic viewing conditions. Each participant performed five tasks, though our analysis focuses on one specific task: determining whether one sample advects to another.
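To state the advection task in code form, here is a minimal sketch that advects a particle from sample a through the vector field and reports whether its path passes near sample b. The field, integration scheme, step size, and tolerance are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def advects_to(field, a, b, dt=0.01, steps=10_000, tol=0.05):
    """Advect a particle from a along the vector field; return True if
    its path ever comes within tol of b."""
    p = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    for _ in range(steps):
        p = p + dt * field(p)          # forward Euler integration step
        if np.linalg.norm(p - b) < tol:
            return True
    return False

# Example: a simple swirling field around the z-axis with upward drift.
swirl = lambda p: np.array([-p[1], p[0], 0.1])
print(advects_to(swirl, a=(1.0, 0.0, 0.0), b=(0.0, 1.0, 0.16)))  # True
```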
By the time participants came to the advection task, they had typically been using the visualization system for at least one hour. In this sense, they were moderately trained users, not complete beginners.
Also worth noting: in many instances in the video it is difficult to determine whether the visualization is being displayed monoscopically or stereoscopically. We should be able to go back, access the visualization parameters from the user study, and integrate them more fully with our findings.
=== Results ===
Trevor observations:
User #1, advection task:
- been using the application a while
- lots of continuous rotation
- using her fingers to follow the flow in space (stereo?)
- found a viewpoint she likes, rocking back and forth
- very brief period when data comes up to make an initial decision, then rotating
- rotating left/right?
- orthogonal to major features of the flow
- in swirling flow, seems to stop and look down the eye of the flow
- "couldn't see the cones" in the stereo tubes
- again in stereo, tracing paths with her fingers in space to determine advection
- "lines everywhere!" comment with tubes led to a complete guess... couldn't see the points of interest
- rarely looks away from the visualization to interact with it, though makes a conscious effort to enter the correct keys for answering task questions
=== Discussion and conclusions ===
Editing...