CS295J/Kalafarski week 1
I am not sure where to put this 250-word summary, so it is going here.
"State of the Art" presents a comprehensive taxonomy for automating usability analysis. They define interface evaluation and its purposes, the primary of which is identifying specific problems with interfaces. Advantages of automated usability evaluation (AUE) are purported to be efficiency, such as comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design. AUE can be embedded into the design phase of the interface, while more traditional methods (such as formal user testing and informal use) must occur after implementation and significant resources have been committed. They assess many current techniques, breaking them into five classes: testing, inspection, inquiry, analytical modeling, and simulation. Of these, only the first two identify specific problems; the rest generate a summative analysis.
In their exhaustive survey, the researchers identify about two dozen techniques, classify them, and note how frequently each has been used in automated systems. They find that the vast majority of techniques appear only in non-automated assessments: 64% of techniques have never been employed in an automated manner, and only 22% require neither formal testing nor informal use.
While I initially feared that the existence of this paper meant much of the work we planned had already been explored, there appears to be much still to be done (at least as of 2002). Many of these techniques have never been automated, even at the capture level. Although quantitative assessment is touched on only minimally, I believe that as a library of methods and their "automatability" this paper will be a very strong and frequent reference for us.