CS295J/Experiment results from class 6
==EJ and Jon: Manipulating an image in Photoshop==
===Methods===
We examine the workflow of a participant given an open-ended image manipulation task. Given one high-level creative task, the user was responsible for deciding the lower-level tasks necessary to complete it (and was given the resources for several possible decisions, in the interest of time), deciding the operations necessary to complete those tasks, and discovering the interactions necessary to complete those operations (with basic terminology provided, again, in the interest of limited time).
The user had intermediate experience with Photoshop—he knew concepts and capabilities, if not specific terminology (which was provided to the user, when asked) and functions (which the user needed to discover on his own).
===Results===
"Second-level" operations for the entirety of the experiment broke down as follows, discretized into separate operations as roughly identified in the "language" of Photoshop itself. There were a total of 72 operations performed by the user over the course of ~20 minutes.
m 00:11 select v-j day image
k 00:41 select all
k 00:45 copy
k 00:46 select main image, paste
m 00:52 select move tool
m 00:55 move v-j day layer
m 02:08 select transform
m 02:13 scale image
m 02:20 change opacity
m 02:44 move v-j layer
k 02:55 commit transform
m 03:07 new mask on v-j layer
m 03:29 select parade image
k 03:31 select all
k 03:32 copy
m 03:33 select main image
k 03:34 paste
m 03:37 move layer
m 03:42 change layer opacity
m 04:26 select magic wand tool
mk 04:46 change tolerance
m 04:50 click with magic wand
k 04:51 undo
mk 04:55 change tolerance
m 04:57 click with magic wand
k 04:59 undo
mk 05:06 change tolerance
m 05:06 click with magic wand
k 05:09 undo
mk 05:14 change tolerance
m 05:14 click with magic wand
k 05:12 undo
m 05:26 click with magic wand
k 05:30 undo
m 05:40 click "contiguous" checkbox
m 05:49 click with magic wand
m 05:50 click with magic wand
mk 05:51 change tolerance
mk 06:00–07:50 rapid selection, undoing with magic wand
m 08:00 select lasso tool
m 08:25 select crop tool
m 08:33 crop
m 08:57 select lasso tool
m 09:00–09:44 rapid selection of polygons
m 10:06 select freehand lasso tool
m 10:32–11:21 rapid selecting with freehand lasso tool
m 11:28 cut
m 11:25 paste as new layer
m 11:45 delete layer
m 12:10 change opacity
m 12:40 select eraser tool
m 12:48 change size of eraser tool
m 12:57–13:05 erasing
m 13:10 undo
m 13:31 opacity
m 13:38 opacity
m 13:45–14:30 changing visibility of layers
m 15:34–15:45 erasing
m 15:56 move guy
m 16:05–16:15 rapidly changing visibility of layers
m 16:34 crop
m 17:03 delete layer
m 17:13 desperately trying to deselect
m 18:00–18:28 Black and White dialogue
m 18:34 opacity
m 18:45–19:10 Add Noise dialogue
m 20:00 select, copy v-j day image
m 20:03 paste v-j day selection
m 20:20 scaling and moving
m 20:47 scaling and moving again
m 20:58 moving layer
m 21:10 layer order
m 21:15 erasing
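Counts like the 72 operations above can be checked mechanically from a transcript in this `category timestamp description` format. A minimal sketch (the log lines are excerpted from the table above; the parsing rules are our assumption about the notation):

```python
import re
from collections import Counter

# A few lines excerpted from the second-level operation log above.
LOG = """\
m 00:11 select v-j day image
k 00:41 select all
k 00:45 copy
mk 04:46 change tolerance
m 04:50 click with magic wand
k 04:51 undo
mk 06:00–07:50 rapid selection, undoing with magic wand
"""

# category letters, start time, optional end time, free-text description
LINE = re.compile(r"^(?P<cat>[mk]+)\s+(?P<start>\d\d:\d\d)(?:[-–]\d\d:\d\d)?\s+(?P<desc>.+)$")

def tally(log: str) -> Counter:
    """Count operations per input category (m = mouse, k = keyboard, mk = both)."""
    counts = Counter()
    for line in log.splitlines():
        match = LINE.match(line)
        if match:
            counts[match.group("cat")] += 1
    return counts

print(tally(LOG))  # counts: k=3, m=2, mk=2
```

Dividing such counts by the task duration would give the ~3.6 operations per minute implied by 72 operations in ~20 minutes.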
"Lowest level" operations, referring to the specific breakdown of the second-level application operations into their constituent clicks/keystrokes/etc., was not feasible given the timeframe. We transcribed two areas of particular interest, which can be mapped to operations above by timestamp. Given the granularity of these sections, we estimate that 250–750 interactions were performed over the ~20-minute task. | |||
m 00:05 expose (top-left corner)
m 00:12 click ("V-J Day" image, double-click)
k 00:33 shift-a
m 00:34 right-click (on image)
m 00:34 right-click (on image)
m 00:35 click (on image title bar)
m 00:37 click (on Edit menu)
m 00:39 click (off menu)
k 00:43 cmd-a
k 00:45 cmd-c
m 00:49 click ("modern" image title bar)
k 00:49 cmd-v
m 00:53 click (move tool)
m 00:53 click-and-drag ("V-J Day") layer
m 00:59 right-click (layer-menu)
m 01:00 click image
m 01:08 click (menu bar)
m 01:11 click (menu bar)
m 01:12 click image
m 01:19 click menu bar
m 01:48 click Layer > Layer Properties
m 01:49 click Cancel
m 01:53 right-click layer
m 01:54 click off secondary menu
m 02:03 click menu bar
m 02:04 click Edit > Transform > Scale
m 02:13 click and drag (corner of "V-J Day" layer)
m 02:14 click opacity
m 02:14 click opacity
m 02:18 click-and-drag opacity
m 02:24 click-and-drag move layer
m 02:27 click-and-drag move history pane
m 02:42 click-and-drag move layer
m 02:48 click opacity
m 02:48 click opacity
m 02:48 click-and-drag move opacity
k 02:54 enter
m 03:04 right-click layer
m 03:04 click off layer
m 03:07 click New Mask button (layer pane)
m 03:16 expose
m 03:28 click "Suffrage Parade" image
k 03:31 cmd-a
k 03:31 cmd-c
m 03:34 click title bar "modern" image
k 03:35 cmd-v
m 03:38 click-and-drag "Parade" layer
m 03:40 click opacity
m 03:40 click opacity
m 03:54 click-and-drag opacity
k 03:56 enter
m 03:58 click layer eye
m 03:58 click layer eye
m 04:01 rapid (4) clicks on layer eyes
m 04:12 click Background layer
m 04:12 click "Parade" layer
m 04:14 click Background layer
m 16:41 click cutout guy layer
m 16:41 click cutout guy layer
m 16:41 click cutout guy layer
m 16:42 click cancel on Layer Blending dialogue
m 16:43 click cutout guy layer
m 16:43 click layer
m 16:49 expose
m 16:49 click image
m 16:49 click move tool
m 16:52 mouse down on image
m 16:53 click Yes in dialogue
m 16:54 click eye on layer
m 16:57 click-and-drag selection on image
m 16:57 right-click layer
m 16:59 click Delete Layer in secondary menu
m 17:00 click Yes in dialogue
m 17:06 click move tool
m 17:06 click move tool
k 17:07 esc
m 17:07 click move tool
k 17:07 esc
k 17:07 esc
k 17:07 esc
m 17:11 click guy layer
k 17:16 cmd-z
k 17:16 cmd-a
m 17:18 expose
m 17:19 select "V-J Day" image
Manual clustering of second-level operations into higher-level "subtasks" yielded the following breakdown of approximately 10 subtasks.
00:00–00:30 strategizing
00:30–03:30 integrating v-j day image into main image (scale, placement, opacity, mask)
03:30–04:30 integrating parade into main image (opacity)
04:30–12:00 attempting to select guy and move to separate layer (magic wand w/ different tolerances, polygon lasso tool, freehand lasso tool, eraser); crops halfway through to justify difficulty in removing text, but asks for approval; changes goal halfway through selection
12:00 asks for clarification
12:30–16:30 adjusting parade (opacity, eraser); crops again to avoid difficult integration, but apologizes
16:30–17:30 frustration at results of cropping, associated with more rapid keypresses/mouseclicks
17:30–19:30 adjusting guy (saturation, noise ["not immediately clear what it's doing"])
19:30–21:48 integrating v-j day image again (copy, paste, resize, eraser)
At an even higher level, the user was significantly influenced by the resources provided for his use. His vocalizations and use of operating system functionality (particularly OS X's Expose) indicated mental clustering of the above subtasks into higher tasks associated by resource. It is meaningful to note that these higher-level tasks were approached in parallel—the user would often abandon an image subtask and return to it in a later subtask.
00:00–21:48 main image
00:30–03:30, 19:30–21:48 v-j day image
03:30–04:30, 12:30–16:30 parade image
===Discussion and conclusions===
* User would click the opacity tool twice every time, which tells me he didn't initially think that his first click had "worked"—efficiency in response is important
* User would perform "visibility" operations, like Expose or the layer eye, while thinking, perhaps so he could keep his mental image of all the program elements as fresh as possible
* User selects tools that are already selected, especially in moments of peak frustration, and repeatedly tries operations that have no effect
* User liked Expose, though he had never used it before (Windows user)
* User was able to integrate Expose into common series of operations, such as select all/copy/select separate image/paste, which he was able to perform very quickly
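The "clicked opacity twice" pattern noted above could be mined from the low-level transcript automatically: flag consecutive identical actions that occur within a short window. A sketch, assuming the `(MM:SS, action)` pairs below (excerpted from the transcript above) and a 2-second window:

```python
# Flag repeated identical actions within a short window, a possible sign the
# user thought the first try "didn't take" (timestamps from the low-level log above).
ACTIONS = [
    ("02:14", "click opacity"), ("02:14", "click opacity"),
    ("02:48", "click opacity"), ("02:48", "click opacity"),
    ("02:54", "enter"),
    ("17:06", "click move tool"), ("17:06", "click move tool"),
]

def seconds(ts: str) -> int:
    m, s = ts.split(":")
    return int(m) * 60 + int(s)

def repeats(actions, window=2):
    """Pairs of consecutive identical actions no more than `window` seconds apart."""
    return [(t1, a1) for (t1, a1), (t2, a2) in zip(actions, actions[1:])
            if a1 == a2 and seconds(t2) - seconds(t1) <= window]

print(repeats(ACTIONS))  # → three repeated-click incidents
```

The 34-second gap between the 02:14 and 02:48 opacity clicks keeps them from being merged into one incident.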
Latest revision as of 18:22, 4 March 2009
==David: Interactively identifying brain pathways==
===Methods===
Eni Halilaj, a CS PhD student, produced a 21-minute recording of herself running an interactive application named brainapp. This application displays hundreds to thousands of curved cylindrical tubes that are calculated from medical imaging data and that represent coherent white matter. Her task was to select a subset of the tubes corresponding to two specific tracts: the corpus callosum and the corticospinal tract. To select a subset, she creates axis-aligned 3D boxes through which the tubes pass, together with logical rules for which tubes to retain. She demonstrates one case where she selects the tubes that pass through one box but miss another, as well as a case where the tubes pass through two different boxes. Eni has used this program quite a bit, so is quick at it.
The video is online at CS, but is 4+ GB, so I did not put it in the wiki. It was taken on a tripod and showed the computer display, keyboard, mouse, back of Eni's head, and her right arm and hand. It was recorded onto a miniDV videotape and transferred to computer using Windows MovieMaker software. I played it back in PowerDVD, pausing and rewinding as necessary, while entering her actions. The time display was in seconds, and the detail is at that granularity.
===Results===
In the detail below, the second column shows the time and the third column the activity following that time. The first column clusters those activities into higher-level types of actions according to the following key:
k - keyboarding
m - mousing
x - referring to external info
w - waiting
2 - 2d viewing and interaction (e.g., windows)
3 - 3d viewing and interaction
T - talking out loud in ways that are likely delaying the task
k 00:09 sits down
k 00:12 text window pops up; moving text window around
k 00:22 starts typing into text window
k 00:44 still typing
k 00:51 starts changing the subject name/location; is doing this by editing a text file, backspacing over something and then typing something new
kx 01:07 looking over at some paper; looking back at the screen; still editing the subject info
k 01:21 starting to start brainapp; typing at a command line prompt, describing the options to the command
k 01:57 still typing the options
k 02:07 still typing the options
w 02:17 an empty graphics window pops up
m2 02:23 brings text window to the front, moves it to the corner
w 02:29 done with that
w 02:33 waiting
w 02:37 waiting
w 02:40 waiting
w 02:50 waiting
k 02:58 types in the text window in response to output
w 03:08 additional bunch of text output came out
m 03:14 a brain shows up in the graphics window
m2 03:16 pops the graphics window in front of the text one
m3 03:18 resizes the brain
m3 03:25 rotating brain and bringing up dialog boxes; everything being done with the mouse; uses slider to adjust threshold, increasing/decreasing tube density
m3 03:42 moves toward
T 03:50 pointing at the corpus callosum; explaining what she'll do
m3 04:05 adjusting box parameters in the menus' slider bars
m3 04:22 moved head forward to look very closely at the display
m3x 04:28 still looking very closely, looking off to the left at paper
m3 04:30 looking back at the display; adjusting some parameters
mT 04:44 rotating the brain around to different positions, bringing it up in between the various menu windows; describing how to pick boxes so that they don't have too many incorrect tubes, and finally rotating around to verify that the box has good stuff in it
m3 05:20 add ROI; many more tubes come up; trying to highlight the box by clicking on it; having difficulty because it's surrounded by some of the other things
m3 05:49 box is now selected; all of the unselected tubes go away; rotating this around
mT 05:58 pointing out some unwanted tubes
mT 06:07 describing putting a new box to get rid of unwanted tubes; dragging
m3 06:15 dragging box faces around with sliders
m3 06:24 moving, zooming in
m3 06:28 adjusting box parameters
m3 06:37 done with second box; looking to use it as a tool for removing unwanted tubes
m3 06:47 rotating around to check for accuracy
m3 06:52 rotating around to check for accuracy
m3 06:57 identifies more tubes that are incorrect
m3 06:76 working to place another box; adjusting the edges
m3 07:14 carefully adjusting the faces
m3 07:19 rotating around to check
m3 07:22 rotating
m3 07:26 more rotating
m3 07:32 very intense study of the display while rotating things; adjusting the box size left/right with the mouse
m3 07:50 saving the boxes for future use, closing the window
k 07:56 the windows closed
k 08:08 cortical spinal tract in the same subject; starting up brainapp again
w 08:10 waiting
w 08:22 waiting
w 08:30 waiting
kw 08:37 something pops up in the text window; types a response
m2 08:49 window shows up with brainapp and she starts rotating the brain; brings up menus
m3 09:03 adjusting brain orientation
m3 09:13 still adjusting the brain
m3 09:22 still adjusting the brain
m3 09:25 rotating around
m3T 09:29 at the cortico-spinal tract and describing it
m3 09:47 adjusting parameters of the box
m3 09:54 adjust another parameter, adjusting the sliders
m3 09:59 another one
m3 10:04 now rotating the brain around to look at the box that's been selected
m3 10:12 continuing to rotate and translate
m3T 10:25 describing that it has some extra tracks and what should be done
m3 10:35 selecting just the tracts going to the box
m3T 10:45 pointing at the things that we want to get rid of
m3 10:52 creating another box
m3 10:58 moving it around and adjusting it
m3 11:04 continuing to move it
m3 11:10 now rotating the brain, duplicating that box and moving it over to the other side so that you can catch the pieces that are wrong on that side
m3 11:31 adjusting the third box a little bit, location-wise
m3 11:39 zooming in
m3T 11:45 explaining how to combine boxes (tracts through 2)
m2 11:51 select 2 boxes
m2 12:00 bring up combinations dialog box
m2 12:02 select 'and'
m2 12:06 select other two
m2 12:10 select 'and'
T 12:12 explain
m3 12:16 move head in and carefully study, zoom, rotate
m3 12:21 explain extra fiber
m3 12:31 make another box
m3 12:42 adjusting parms
m3 12:47 visually verifying
k2 12:51 saving out
w 12:56 saved and closed
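One use of the category column above is a rough time budget per activity type, computed by differencing consecutive timestamps. A sketch under the assumption that each entry lasts until the next one begins (the entries are excerpted from the transcript; combined codes such as `m3` charge both categories):

```python
from collections import Counter

# Excerpt of the transcript above: (category, MM:SS, description).
ENTRIES = [
    ("k",  "00:09", "sits down"),
    ("w",  "02:17", "an empty graphics window pops up"),
    ("m2", "02:23", "brings text window to the front"),
    ("m3", "03:18", "resizes the brain"),
    ("k2", "12:51", "saving out"),
    ("w",  "12:56", "saved and closed"),
]

def seconds(ts: str) -> int:
    m, s = ts.split(":")
    return int(m) * 60 + int(s)

def time_per_code(entries) -> Counter:
    """Charge each interval to every single-character code in its category.

    An entry is assumed to last until the next entry starts; the final entry
    gets zero duration since its end time is unknown.
    """
    totals = Counter()
    for (cat, ts, _), (_, next_ts, _) in zip(entries, entries[1:]):
        dur = seconds(next_ts) - seconds(ts)
        for code in cat:          # 'm3' charges both 'm' and '3'
            totals[code] += dur
    return totals

print(time_per_code(ENTRIES))
```

On this excerpt the dominant bucket is mousing in the 3D view, which matches the impression from the full transcript that most time went to rotating and adjusting boxes.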
===Discussion and conclusions===
- the list of time-labeled actions is unsatisfying -- I would like to see things on a timeline that is regularly spaced, but don't know how to do that.
- manually transcribing data like this is a pain -- I would like a way to capture various easy-to-capture events together with one or more video signals so that I can analyze offline. Combining that with a display would be really helpful.
- seeing the user's head was helpful. there were several cases where she looked off to the side or moved her head in close to the display; these were useful evidence of what she was thinking about
- this application requires a lot of waiting and keyboarding that seems wasteful
- the user was very focused on creating appropriate boxes to accomplish selection of the right tubes. It wasn't clear that a major change in this interface would actually speed up the process very much; much of the interaction time was spent evaluating the results, which would likely be needed for any interaction method
- as an aside, some way of capturing the emotional state of a user could be useful: tired, frustrated, angry, bored, confused, etc. Perhaps camera-based?
- the appropriate level of granularity for this kind of work is unclear. At this level of seconds, there are interesting actions, but how to change them is unclear.
- the grouping of actions into categories seemed natural as I watched the tape. They were intended to cluster some of the actions into activities. They do not capture higher-level goals.
- for this program, the goal structure is something like:
  - open the program
  - select the subject
  - loop:
    - define a box that refines the set of selected tubes by iteratively
      - moving one face with a slider
      - possibly rotating the 3D model to study the result
    - incorporate the new box with any existing ones
    - rotate the 3D model to evaluate whether the set of tubes is complete
  - save the results
  - quit the program
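The wish above for a regularly spaced timeline could be met with a simple text rendering: sample the time-labeled actions onto fixed-width bins and print one character per bin. A sketch, assuming each action runs until the next one begins (the event times are illustrative excerpts converted to seconds):

```python
# Render time-labeled category codes onto a regularly spaced text timeline.
# (code, seconds since start), excerpted from the transcript above.
EVENTS = [("k", 9), ("w", 137), ("m", 143), ("3", 198), ("k", 771), ("w", 776)]

def timeline(events, end, bin_size=30):
    """One character per bin_size seconds; each event's code fills bins until the
    next event starts ('.' marks bins before the first event)."""
    chars = []
    for start in range(0, end, bin_size):
        active = "."
        for code, t in events:       # events are in time order, so the last
            if t <= start:           # match is the one active at this bin
                active = code
        chars.append(active)
    return "".join(chars)

print(timeline(EVENTS, end=776))
```

Events shorter than a bin (here the 6-second wait at 02:17) disappear at this resolution, so the bin size trades readability against fidelity.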
==Gideon: Modifying Microsoft PowerPoint Presentations==
===Methods===
A Cognitive Science PhD student produced a 26-minute recording of herself and her computer using Microsoft PowerPoint. I created a custom task in which she was given two open PowerPoint files named start.pptx and finish.pptx. Her goal was to alter start.pptx so that it matched finish.pptx. There were 7 slides with custom animations, transitions, text, and artwork. The participant was asked to explain her actions as she used the software, and the experimenter sat beside her to address any concerns that came up.
The video was captured using iShowU HD for the Macintosh. The software captures voice, video (of the face), screen video, computer sound output, mouse clicks, and keyboard presses. A .mov file was output at default YouTube quality settings (640x480).
===Results===
There is a lot of information in the output. The formatted movie output contains metadata about user information. I did not post the video due to its large size; it is on my computer.
===Discussion and conclusions===
- The user was used to a previous version of the software, so had to relearn the location of many features.
- One of the first things done was closing the formatting palette!
- Audio explanations were helpful for inferring the user's goals.
- I think switching between powerpoint files via keyboard shortcut (command+tilde) is much more productive than her technique (using the mouse, manually dragging and resizing windows).
- It was very interesting to see what she noticed and didn't notice.
- It was also interesting that she perceived differences that weren't there.
- She seemed to learn and develop a strategy to modify (format) text and objects.
- She did not bother with animation or transitions, interestingly.
==Eric: Creating an Animation Using Maya==
===Methods===
A Computer Science masters student was documented performing animation work for 30 minutes using the Maya application on a MacBook computer. Her task was to create a simple walking animation of a 3D model. The video shows her editing 3-4 specific frames of this animation.
The video was captured using iShowU (not HD) for the Macintosh. The software captures voice, screen video, computer sound output, and mouse clicks. A separate video was taken of the keyboard while the user was making edits. A .mov file for each of these videos was output. Compressed versions of these videos can be seen here:
===Results===
The following operator abbreviations were used:
- mv multi-view
- sv single-view
- sc scroll
- cf click on foot
- sq square click
- fp foot plane
- r rotate focused item
- sh shift focused item
- rv rotate view
- sf scroll through frames
- mb menu bar selection
- ge graph editor
- sa select area
- cw close window
- man manip (right box)
- cb click box
- ed enter data
- u undo
- rc right click
- ks key selected
Task Time
- 1
- 2
- Switch to Maya 3
- 4
- 5
- mv 6
- sv 7
- sc 8
- 9
- cf 10
- 11
- 12
- 13
- mv 14
- sv 15
- 16
- 17
- 18
- 19
- fp 20
- 21
- 22
- r 23
- 24
- sh 25
- 26
- 27
- 28
- 29
- 30
- 31
- mv, sv 32
- 33
- sq 34
- mv, sv 35
- 36
- 37
- sq 38
- sh 39
- 40
- 41
- sh 42
- 43
- 44
- switched to screen capture application 45
- returned to Maya 46
- mv 47
- sv 48
- 49
- mv, sv 50
- rv 51
- 52
- 53
- 54
- mv, sv 55
- 56
- 57
- 58
- 59
- sh 60
- 61
- 62
- 63
- 64
- 65
- 66
- 67
- 68
- View desktop 69
- return to Maya 70
- sf 71
- 72
- 73
- 74
- 75
- 76
- mv 77
- 78
- sv 79
- rv, sq 80
- mv 81
- 82
- mb 83
- 84
- 85
- 86
- 87
- 88
- 89
- ge 90
- 91
- sa 92
- 93
- 94
- 95
- cw 96
- sv 97
- 98
- 99
- 100
- 101
- 102
- 103
- 105
- 106
- sf 107
- 108
- 109
- 110
- 111
- man 112
- 113
- cb 114
- 115
- 116
- 117
- ed 118
- 119
- 120
- sv mv sv man cb 121
- ed 122
- 123
- 124
- cb 125
- 126
- cb 127
- 128
- 129
- 130
- sq 131
- man 132
- cb 133
- 134
- 135
- sq 136
- man cb ed 137
- 138
- 139
- 140
- sq 141
- man cb 142
- 143
- sq 144
- 145
- cb ed 146
- 147
- 148
- sq 149
- man sq 150
- 151
- 152
- sq 153
- 154
- cb ed 155
- 156
- rv 157
- 158
- u? 159
- 160
- sq 161
- 162
- 163
- man cb 164
- 165
- rc ks 166
- 167
- 168
- 169
- 170
- cb ed 171
- 172
- 173
- 174
- sq 175
- 176
- sq 177
- cb ed 178
- 179
- 180
- 181
- rv 182
- mv sv 183
- 184
- 185
- 186
- mv sv 187
- rv 188
- man cb ed 189
- 190
- 191
- 192
- 193
- 194
- 195
- sq 196
- 197
- cb ed 198
- 199
- 200
- sf 201
- 202
- 203
- 204
- sq 205
- 206
- cb ed 207
- 208
- 209
- 210
- sf 211
- 212
- 213
- 214
- 215
- 216
- 217
- 218
- 219
- 220
- cb ed 221
- 222
- 223
- sf 224
- 225
- 226
- 227
- 228
- man cb ed 229
- 230
- 231
- sf 232
- 233
- 234
- 235
- man cb ed 236
- 237
- 238
- 239
- 240
- 241
- sq 242
- man cb ed 243
- 244
- mv sv 245
- 246
- sf 247
- 248
- 249
- 250
- 251
- 252
- 253
- 254
- 255
- 256
- mv 257
- 258
- sv 259
- 260
- 261
- sq 262
- man cb 263
- 264
- 265
- ed 266
- 267
- 268
- 269
- 270
- mv sv 271
- 272
- 273
- 274
- sq 275
- 276
- 277
- 278
- sf 279
- 280
- 281
- 282
- 283
- switched to screen capture 284
- back to Maya 285
- mv sv 286
- 287
- man 288
- cb ed 289
- 290
- 291
- sh 292
- 293
- sh 294
- 295
- 296
- sh 297
- 298
- 299
- man cb 300
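A per-second log of operator abbreviations like the one above lends itself to simple frequency counts. A sketch (the excerpt and the comma/space splitting rule are our assumptions about the notation; lines carrying only a second number would be skipped):

```python
from collections import Counter

# Excerpt of the task log above: operator abbreviations (possibly several),
# followed by the second at which they occurred.
LOG = """\
mv 6
sv 7
sc 8
mv, sv 32
sq 34
man cb ed 137
sq 141
"""

def operator_counts(log: str) -> Counter:
    """Count each operator abbreviation, treating the trailing number as the time."""
    counts = Counter()
    for line in log.splitlines():
        *ops, _second = line.replace(",", " ").split()
        counts.update(ops)
    return counts

print(operator_counts(LOG))
```

Run over the full 300-second log, such counts would show which operators (e.g. `sq`, `cb ed`) dominate the editing loop.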
===Discussion and conclusions===
- Manually entering operators is annoying. I think this process has to be automated in order to get any value out of it.
- It was interesting to see how using different hardware affected the user's actions. The user was always right clicking with the control button (although this didn't seem to slow her down, as her hand was always right there).
- It's unclear how distracted the user was by being recorded, as she flipped back to the video screen twice.
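Automating the operator entry, as suggested above, could start from a lookup that maps raw instrumented UI events to the abbreviations used here and collapses immediate repeats. A hypothetical sketch (the raw event names and the mapping are invented for illustration; they are not Maya's actual event stream):

```python
# Hypothetical raw-event names mapped to the operator abbreviations above.
EVENT_TO_OP = {
    "panel.multi_view": "mv",
    "panel.single_view": "sv",
    "viewport.rotate": "rv",
    "timeline.scrub": "sf",
    "field.edit": "ed",
}

def to_operators(raw_events):
    """Translate (second, event) pairs to (second, op), dropping unknown events
    and collapsing consecutive duplicates of the same operator."""
    ops = []
    for second, event in raw_events:
        op = EVENT_TO_OP.get(event)
        if op is None:
            continue                          # event with no operator code
        if ops and ops[-1][1] == op:
            continue                          # same operator repeated back-to-back
        ops.append((second, op))
    return ops

raw = [(6, "panel.multi_view"), (7, "panel.single_view"),
       (7, "panel.single_view"), (9, "mouse.move"), (51, "viewport.rotate")]
print(to_operators(raw))  # → [(6, 'mv'), (7, 'sv'), (51, 'rv')]
```

The manual transcription step then reduces to maintaining the event-to-operator table rather than watching the tape second by second.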
==Trevor and Andrew: Analyzing Users in a 3D Flow Visualization Study==
===Methods===
We examine workflows from two participants in a visualization user study comparing four methods for visualizing 3D vector fields. A streamline and a streamtube method were used to visualize a vector field in both monoscopic and stereoscopic viewing conditions. Five tasks were performed by each participant, though in our analysis we chose to focus on one specific task: determining whether one sample advects to another.
By the time participants came to the advection task, they had typically been using the visualization system for at least one hour. In this sense, they were moderately trained users, not complete beginners.
Also worth noting: in many instances in the video it is difficult to determine whether the visualization is being displayed monoscopically or stereoscopically. We should be able to go back and access the visualization parameters from the user study and integrate them more fully with our findings.
===Results===
====Trevor observations====
- been using the application a while
- Lots of continuous rotation
- using fingers to follow the flow in space (stereo?)
- found a viewpoint participants liked, rocking back and forth
- very brief period when data comes up to make an initial decision, then rotating
- rotating left/right? orthogonal to major features of the flow
- in swirling flow, seems to stop and look down the eye of the flow
- "couldn't see the cones" in the stereo tubes
- again in stereo, tracing paths with their fingers in space to determine advection
- "lines everywhere!" comment with tubes led to a complete guess... couldn't see the points of interest
- rarely looks away to interact with visualization, though conscious effort to enter the correct keys for answering task questions
====Andrew observations====
- does not stop to look at keyboard
- rotates continuously for first two
- does not stop rotation for breaks
- doesn't really stop and look; participants rotate continuously
- participants tipped head to the left to see it from a different angle
- rocking behavior where participants tip head back and forth (left and right)
- tends to rotate orthogonal to the dominant axis of the flow
- traces things with their fingers
- gesticulates sometimes
- roughly equal amount of rotation for tubes vs lines
- "couldn't see the cones, too many lines" (under tubes condition)
- more hand/finger gesticulation (stereo)
- participants never look at the keyboard
- participants never look away from the visualization to ponder abstractly
- participants consistently angle their heads (as if there is something beneficial from doing that)
=== Discussion and conclusions ===
We aim to find a consensus between the two observers' notes and summarize it here.
1. Users favored continuous rotation over static view or meditative pauses.
2. Hand gesticulation seemed to be used as a method of validation in stereo views.
3. Several verbal comments regarding occlusion suggest drawbacks to the tube view.
4. Rotating is beneficial and perhaps necessary for this task.
5. No references to external, offscreen information. In fact, very rarely did participants glance away from the screen.
6. Our assessment of a typical session beginning with a new dataset:
* Data loads; split-second decision to begin rotating.
* Continuous rotation until the proper viewpoint is determined.
* Rocking back and forth about the optimal viewpoint. (Thinking?)
* In stereo views, participants were noted tilting their heads fairly consistently.
== EJ and Jon: Manipulating an image in Photoshop ==
=== Methods ===
We examined the workflow of a participant given an open-ended image-manipulation task. Given one high-level creative task, the user was responsible for deciding the lower-level tasks necessary to complete it (and was given the resources for several possible decisions, in the interest of time), deciding the operations necessary to complete those tasks, and discovering the interactions necessary to complete those operations (with basic terminology provided, again, in the interest of limited time).

The user had intermediate experience with Photoshop: he knew the concepts and capabilities, if not the specific terminology (which was provided when he asked) or the specific functions (which he needed to discover on his own).
=== Results ===
"Second-level" operations for the entire experiment broke down as follows, discretized into separate operations roughly as identified in the "language" of Photoshop itself. There were a total of 72 operations performed by the user over the course of ~20 minutes.
m 00:11 select v-j day image
k 00:41 select all
k 00:45 copy
k 00:46 select main image, paste
m 00:52 select move tool
m 00:55 move v-j day layer
m 02:08 select transform
m 02:13 scale image
m 02:20 change opacity
m 02:44 move v-j layer
k 02:55 commit transform
m 03:07 new mask on v-j layer
m 03:29 select parade image
k 03:31 select all
k 03:32 copy
m 03:33 select main image
k 03:34 paste
m 03:37 move layer
m 03:42 change layer opacity
m 04:26 select magic wand tool
mk 04:46 change tolerance
m 04:50 click with magic wand
k 04:51 undo
mk 04:55 change tolerance
m 04:57 click with magic wand
k 04:59 undo
mk 05:06 change tolerance
m 05:06 click with magic wand
k 05:09 undo
mk 05:14 change tolerance
m 05:14 click with magic wand
k 05:12 undo
m 05:26 click with magic wand
k 05:30 undo
m 05:40 click "contiguous" checkbox
m 05:49 click with magic wand
m 05:50 click with magic wand
mk 05:51 change tolerance
mk 06:00–07:50 rapid selection, undoing with magic wand
m 08:00 select lasso tool
m 08:25 select crop tool
m 08:33 crop
m 08:57 select lasso tool
m 09:00–09:44 rapid selection of polygons
m 10:06 select freehand lasso tool
m 10:32–11:21 rapid selecting with freehand lasso tool
m 11:28 cut
m 11:25 paste as new layer
m 11:45 delete layer
m 12:10 change opacity
m 12:40 select eraser tool
m 12:48 change size of eraser tool
m 12:57–13:05 erasing
m 13:10 undo
m 13:31 opacity
m 13:38 opacity
m 13:45–14:30 changing visibility of layers
m 15:34–15:45 erasing
m 15:56 move guy
m 16:05–16:15 rapidly changing visibility of layers
m 16:34 crop
m 17:03 delete layer
m 17:13 desperately trying to deselect
m 18:00–18:28 Black and White dialogue
m 18:34 opacity
m 18:45–19:10 Add Noise dialogue
m 20:00 select, copy v-j day image
m 20:03 paste v-j day selection
m 20:20 scaling and moving
m 20:47 scaling and moving again
m 20:58 moving layer
m 21:10 layer order
m 21:15 erasing
Transcribing "lowest-level" operations, i.e. the breakdown of the second-level application operations into their constituent clicks, keystrokes, and so on, was not feasible for the full session given the timeframe. We transcribed two areas of particular interest, which can be mapped to the operations above by timestamp. Extrapolating from the granularity of these sections, we estimate that 250–750 interactions were performed over the ~20-minute task.
m 00:05 expose (top-left corner)
m 00:12 click ("V-J Day" image, double-click)
k 00:33 shift-a
m 00:34 right-click (on image)
m 00:34 right-click (on image)
m 00:35 click (on image title bar)
m 00:37 click (on Edit menu)
m 00:39 click (off menu)
k 00:43 cmd-a
k 00:45 cmd-c
m 00:49 click ("modern" image title bar)
k 00:49 cmd-v
m 00:53 click (move tool)
m 00:53 click-and-drag ("V-J Day") layer
m 00:59 right-click (layer-menu)
m 01:00 click image
m 01:08 click (menu bar)
m 01:11 click (menu bar)
m 01:12 click image
m 01:19 click menu bar
m 01:48 click Layer > Layer Properties
m 01:49 click Cancel
m 01:53 right-click layer
m 01:54 click off secondary menu
m 02:03 click menu bar
m 02:04 click Edit > Transform > Scale
m 02:13 click and drag (corner of "V-J Day" layer)
m 02:14 click opacity
m 02:14 click opacity
m 02:18 click-and-drag opacity
m 02:24 click-and-drag move layer
m 02:27 click-and-drag move history pane
m 02:42 click-and-drag move layer
m 02:48 click opacity
m 02:48 click opacity
m 02:48 click-and-drag move opacity
k 02:54 enter
m 03:04 right-click layer
m 03:04 click off layer
m 03:07 click New Mask button (layer pane)
m 03:16 expose
m 03:28 click "Suffrage Parade" image
k 03:31 cmd-a
k 03:31 cmd-c
m 03:34 click title bar "modern" image
k 03:35 cmd-v
m 03:38 click-and-drag "Parade" layer
m 03:40 click opacity
m 03:40 click opacity
m 03:54 click-and-drag opacity
k 03:56 enter
m 03:58 click layer eye
m 03:58 click layer eye
m 04:01 rapid (4) clicks on layer eyes
m 04:12 click Background layer
m 04:12 click "Parade" layer
m 04:14 click Background layer
m 16:41 click cutout guy layer
m 16:41 click cutout guy layer
m 16:41 click cutout guy layer
m 16:42 click cancel on Layer Blending dialogue
m 16:43 click cutout guy layer
m 16:43 click layer
m 16:49 expose
m 16:49 click image
m 16:49 click move tool
m 16:52 mouse down on image
m 16:53 click Yes in dialogue
m 16:54 click eye on layer
m 16:57 click-and-drag selection on image
m 16:57 right-click layer
m 16:59 click Delete Layer in secondary menu
m 17:00 click Yes in dialogue
m 17:06 click move tool
m 17:06 click move tool
k 17:07 esc
m 17:07 click move tool
k 17:07 esc
k 17:07 esc
k 17:07 esc
m 17:11 click guy layer
k 17:16 cmd-z
k 17:16 cmd-a
m 17:18 expose
m 17:19 select "V-J Day" image
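As a side note, logs in this transcription format (an input type, a timestamp or timestamp range, then a free-text description) are easy to tally mechanically. The following is a minimal sketch of how one could parse and count them; the regex, function names, and sample entries are ours, not part of any transcription tool we used:

```python
import re

# Each entry: input type (m = mouse, k = keyboard, mk = both),
# an MM:SS timestamp (optionally a range), and a description.
ENTRY = re.compile(r"^(mk|m|k)\s+(\d{2}:\d{2})(?:[–-](\d{2}:\d{2}))?\s+(.*)$")

def parse(lines):
    """Yield (kind, start_seconds, description) for each well-formed log line."""
    for line in lines:
        match = ENTRY.match(line.strip())
        if not match:
            continue  # skip anything that is not a log entry
        kind, start, _end, desc = match.groups()
        minutes, seconds = start.split(":")
        yield kind, int(minutes) * 60 + int(seconds), desc

# A few entries copied from the transcript above.
log = [
    "m 00:11 select v-j day image",
    "k 00:41 select all",
    "mk 04:46 change tolerance",
]
counts = {}
for kind, _t, _d in parse(log):
    counts[kind] = counts.get(kind, 0) + 1
print(counts)  # {'m': 1, 'k': 1, 'mk': 1}
```

Run over the full transcript, a tally like this would give the mouse/keyboard split directly, which could put numbers behind observations such as the user rarely leaving the mouse.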
Manually clustering the second-level operations into higher-level "subtasks" yielded the following breakdown of approximately 10 subtasks.
00:00–00:30 strategizing
00:30–03:30 integrating v-j day image into main image (scale, placement, opacity, mask)
03:30–04:30 integrating parade into main image (opacity)
04:30–12:00 attempting to select guy and move to separate layer (magic wand w/ different tolerances, polygon lasso tool, freehand lasso tool, eraser); crops halfway through to justify difficulty in removing text, but asks for approval; changes goal halfway through selection
12:00 asks for clarification
12:30–16:30 adjusting parade (opacity, eraser); crops again to avoid difficult integration, but apologizes
16:30–17:30 frustration at results of cropping, associated with more rapid keypresses/mouseclicks
17:30–19:30 adjusting guy (saturation, noise ["not immediately clear what it's doing"])
19:30–21:48 integrating v-j day image again (copy, paste, resize, eraser)
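Once the subtask windows are chosen by hand, assigning each timestamped second-level operation to its subtask is mechanical. A small sketch, assuming half-open time windows; the window labels are abbreviated from the clustering above and the function names are ours:

```python
def to_seconds(mmss):
    """Convert an 'MM:SS' timestamp to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

# (start, end, label) windows abbreviated from the manual clustering;
# only the first few are listed here for illustration.
SUBTASKS = [
    ("00:00", "00:30", "strategizing"),
    ("00:30", "03:30", "integrating v-j day image"),
    ("03:30", "04:30", "integrating parade"),
    ("04:30", "12:00", "selecting guy"),
]

def subtask_for(timestamp):
    """Return the label of the subtask window containing the timestamp."""
    t = to_seconds(timestamp)
    for start, end, label in SUBTASKS:
        if to_seconds(start) <= t < to_seconds(end):
            return label
    return None  # outside the listed windows

print(subtask_for("02:13"))  # integrating v-j day image
```

Boundary operations (e.g. selecting the magic wand at 04:26, just before the 04:30 cut we chose by eye) land on whichever side of the window edge the hand-picked boundaries put them, which is one reason the clustering remains a judgment call rather than something fully automatic.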
At an even higher level, the user was significantly influenced by the resources provided for his use. His vocalizations and use of operating-system functionality (particularly OS X's Expose) indicated a mental clustering of the above subtasks into higher-level tasks, each associated with a resource. It is worth noting that these higher-level tasks were approached in parallel: the user would often abandon an image subtask and return to it in a later subtask.
00:00–21:48 main image
00:30–03:30, 19:30–21:48 v-j day image
03:30–04:30, 12:30–16:30 parade image
=== Discussion and conclusions ===
* The user would click the opacity tool twice every time, suggesting he did not initially believe his first click had "worked"; efficiency of response is important.
* The user would perform "visibility" operations, like Expose or the layer eye, while thinking, perhaps to keep his mental image of all the program elements as fresh as possible.
* The user selected tools that were already selected, especially in moments of peak frustration, and repeatedly tried operations that had no effect.
* The user liked Expose, though he had never used it before (he is a Windows user).
* The user was able to integrate Expose into common series of operations, such as select all/copy/select separate image/paste, which he was able to perform very quickly.