System Usability Scale (SUS)
The System Usability Scale (SUS) is a widely used ten-item Likert-scale questionnaire developed by John Brooke in 1986 at Digital Equipment Corporation to measure subjective assessments of system usability (Brooke, 1996). Brooke described it as a "quick and dirty" usability measure that yields a global assessment of usability across different contexts.
Although SUS has become an industry standard thanks to its simplicity and reliability, several design issues warrant examination. Question 1 asks "I think that I would like to use this system frequently," which assumes frequent use is desirable. That assumption works for productivity tools but makes little sense for systems designed for occasional use, such as password resets or emergency applications. The term "frequently" is also subjective and undefined. A better formulation would be "I would be willing to use this system when needed," which applies regardless of intended usage frequency.
Several questions use vague, subjective terms that respondents interpret inconsistently. "Unnecessarily complex" (Q2), "well integrated" (Q5), and "cumbersome" (Q8) mean different things to novice and expert users. These terms were chosen to make SUS applicable across diverse systems, but that generality sacrifices precision (Fowler, 2014). Adding brief clarifications such as "complex (requiring many steps)" would reduce interpretation variability without limiting the scale's broad applicability. Questions 4 and 10 also assume that technical support and learning resources are available, which is not true in all contexts; separating learnability from support needs would make these items clearer.
The SUS's strength is also its limitation: it produces a single score but does not diagnose which usability aspects are problematic. A score of 55 indicates poor usability but does not reveal whether the issue is complexity, inconsistency, or learnability (Lewis and Sauro, 2009). SUS could therefore be supplemented with one or two open-ended questions, such as "What frustrated you most?", adding diagnostic value while maintaining brevity. The alternating positive/negative item format helps counter acquiescence bias, a sound design choice (Krosnick and Presser, 2010), but the reverse-scoring formula confuses many researchers and leads to calculation errors. Automated scoring would remove that error source while preserving the bias-reduction benefit.
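Because the reverse-scoring step is where most calculation errors creep in, it helps to see the arithmetic spelled out. The sketch below is a minimal Python implementation of the standard SUS scoring formula from Brooke (1996): odd-numbered (positively worded) items contribute response minus 1, even-numbered (negatively worded) items contribute 5 minus response, and the sum is multiplied by 2.5 to yield a 0-100 score. The function name and input validation are illustrative, not taken from any published tool.

```python
def sus_score(responses):
    """Score a single SUS questionnaire.

    `responses` is a list of ten Likert ratings (1-5), in the order the
    items were administered. Returns a score on the 0-100 scale.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses, each between 1 and 5")

    total = 0
    for item, rating in enumerate(responses, start=1):
        if item % 2 == 1:          # odd items are positively worded
            total += rating - 1
        else:                      # even items are reverse-scored
            total += 5 - rating
    return total * 2.5             # raw 0-40 sum scaled to 0-100

# Example: agreement (4) on positive items and disagreement (2) on
# negative items produces a score of 75.0.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```

Embedding the formula in a small, tested function like this is one way an automated scoring tool could prevent the reversal errors described above.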
Despite these limitations, SUS succeeds as a quick, reliable comparison tool. For a wiki homework entry, the key lesson is that questionnaire design involves trade-offs: SUS prioritizes speed and comparability over depth and specificity. Its widespread adoption shows that simple, well-scoped questionnaires can deliver real value when their limitations are understood and acknowledged.
References
- Brooke, J. (1996) 'SUS: A quick and dirty usability scale', in Jordan, P.W., Thomas, B., Weerdmeester, B.A. and McClelland, I.L. (eds.) Usability evaluation in industry. London: Taylor & Francis, pp. 189-194.
- Fowler, F.J. (2014) Survey research methods. 5th edn. Thousand Oaks: SAGE Publications.
- Krosnick, J.A. and Presser, S. (2010) 'Question and questionnaire design', in Marsden, P.V. and Wright, J.D. (eds.) Handbook of survey research. 2nd edn. Bingley: Emerald Group, pp. 263-313.
- Lewis, J.R. and Sauro, J. (2009) 'The factor structure of the System Usability Scale', in Kurosu, M. (ed.) Human centered design. Berlin: Springer, pp. 94-103.