Gray, W. D., & Salzman, M. C. (1998a). Damaged merchandise? A review of experiments that compare usability evaluation methods. Human-Computer Interaction, 13(3), 203-261.
Damaged Merchandise? A Review of Experiments that Compare Usability Evaluation Methods
An interest in the design of interfaces has been a core topic for researchers and practitioners in the field of human-computer interaction (HCI); an interest in the design of experiments has not. To the extent that reliable and valid guidance for the former depends upon the results of the latter, it is necessary that researchers and practitioners understand how small features of an experimental design can cast large shadows over the results and conclusions that can be drawn. In this review we examine the design of five experiments that compared usability evaluation methods (UEMs). Each has had an important influence on HCI thought and practice. Unfortunately, our examination shows that small problems in the way these experiments were designed and conducted call into serious question what we thought we knew regarding the efficacy of various UEMs. If the influence of these experiments were trivial, then such small problems could be safely ignored. However, the outcomes of these experiments have been used to justify advice to practitioners regarding their choice of UEMs. Making such choices based upon misleading or erroneous claims can be detrimental--compromising the quality and integrity of the evaluation, incurring unnecessary costs, or undermining the practitioner's credibility within the design team. The experimental method is a potent vehicle that can help inform the choice of a UEM as well as help to address other HCI issues. However, to obtain the desired outcomes, close attention must be paid to experimental design.
Gray, W. D., & Salzman, M. C. (1998b). Repairing damaged merchandise: A rejoinder. Human-Computer Interaction, 13(3), 325-335.
W. D. Gray and M. C. Salzman respond to comments (see record 1998-03222-002) regarding their article (see record 1998-03222-001) criticizing research on usability evaluation methods as they apply to human-computer interaction. The present authors focus on several themes that emerged throughout their original article and the ensuing comments, including: what usability is and how it is measured; the role of experiments versus other empirical studies in human-computer interaction; and how the value of a study is judged. ((c) 1999 APA/PsycINFO, all rights reserved).