Measure perceived usability with a standardized questionnaire that enables benchmarking across products and releases.
The System Usability Scale (SUS) is a standardized 10-question survey that produces a single usability score from 0 to 100 for benchmarking and comparison.
Developed by John Brooke in 1986, SUS has become one of the most widely used usability assessment tools in UX research and product development. UX researchers, product managers, and quality assurance teams administer it immediately after usability sessions to capture participants' overall impressions while the experience is fresh. The scoring formula is well established, and an extensive body of benchmark data (a score above 68 is considered above average) makes it easy to contextualize results against industry norms. Teams rely on SUS to track usability improvements across releases, compare competing design alternatives with a consistent metric, and give stakeholders a clear quantitative answer to the question of how usable a product is. Because it takes only a few minutes to complete and requires no specialized training to administer, SUS fits into both moderated and unmoderated testing workflows, making it accessible to teams of all sizes.
SUS is a simple, 10-item questionnaire that provides a global view of the perceived usability of a product or system. Each question contributes to a total score out of 100, which can be used to measure overall usability.
Create a questionnaire with the 10 standard SUS questions. Five are positively worded (the odd-numbered items) and five are negatively worded (the even-numbered items). Each question is answered on a 5-point Likert scale ranging from Strongly Disagree (1) to Strongly Agree (5).
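For reference, the ten items and their polarity can be captured in a simple data structure; the wording below follows Brooke's original published items, and the representation itself is just one possible sketch.

```python
# The 10 standard SUS items in questionnaire order. Polarity alternates:
# odd-numbered items (1, 3, 5, 7, 9) are positively worded, even-numbered
# items (2, 4, 6, 8, 10) are negatively worded.
SUS_ITEMS = [
    ("I think that I would like to use this system frequently.", "positive"),
    ("I found the system unnecessarily complex.", "negative"),
    ("I thought the system was easy to use.", "positive"),
    ("I think that I would need the support of a technical person "
     "to be able to use this system.", "negative"),
    ("I found the various functions in this system were well integrated.",
     "positive"),
    ("I thought there was too much inconsistency in this system.", "negative"),
    ("I would imagine that most people would learn to use this system "
     "very quickly.", "positive"),
    ("I found the system very cumbersome to use.", "negative"),
    ("I felt very confident using the system.", "positive"),
    ("I needed to learn a lot of things before I could get going "
     "with this system.", "negative"),
]
```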
Identify and recruit a representative sample of users who will test the product or system. The number of participants may vary with project scope, but a commonly recommended range is 12–20 participants.
Ask participants to complete a series of tasks using the product or system. Record their interactions and observe how they interact with the interface to identify usability issues or areas for improvement.
After completing usability testing, ask participants to fill out the SUS questionnaire to obtain their feedback on the perceived usability of the product or system.
To calculate the SUS score, first subtract 1 from the response values of odd-numbered items (positively worded) and subtract the response values of even-numbered items from 5 (negatively worded). Then sum up the new values and multiply by 2.5. The resulting value is the total SUS score out of 100.
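The scoring rule above can be sketched as a short Python function; `responses` is assumed to hold the ten raw Likert values (1–5) in questionnaire order.

```python
def sus_score(responses):
    """Compute the SUS score (0-100) from ten raw Likert responses (1-5).

    Item 1 is responses[0], item 2 is responses[1], and so on.
    Odd-numbered items are positively worded, even-numbered negatively.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items contribute (response - 1); even items contribute
        # (5 - response), so each item is worth 0-4 points.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # 10 items x 4 points max = 40, scaled to 0-100

# Strongly Agree on every positive item and Strongly Disagree on every
# negative item yields the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```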
Examine the total SUS scores, as well as individual response patterns, to identify areas of high and low perceived usability. A higher SUS score indicates better usability, with a score above 68 considered above average.
Compile the findings from the usability testing and the SUS questionnaire in a comprehensive report. Use these findings to make recommendations for improving the product or system's usability.
Make the recommended improvements to the product or system and conduct additional rounds of usability testing and SUS questionnaires to measure the impact of these changes on the perceived usability.
After administering the System Usability Scale, your team will have a single composite usability score for each participant and an aggregated score for the product overall. You can compare this score against the industry average of 68 and use established grade scales to communicate results to stakeholders. Individual question breakdowns will reveal which dimensions of usability are strongest and weakest, such as perceived complexity, need for support, or consistency. When conducted across releases, SUS scores provide a clear trend line showing whether design changes are improving the user experience. The standardized nature of the results makes them credible and easy to communicate, giving your team quantitative evidence to support design decisions and prioritize usability improvements.
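Aggregating per-participant scores and framing the mean against the benchmarks cited in this guide (68 as the historical average, above 80 as excellent) might look like the following sketch; the function name, labels, and sample scores are illustrative.

```python
from statistics import mean

def summarize_sus(per_participant_scores):
    """Average individual SUS scores and attach an interpretive label.

    Thresholds follow common SUS benchmarks: 68 is the historical
    average, and scores above roughly 80 are read as excellent.
    """
    avg = mean(per_participant_scores)
    if avg > 80:
        label = "excellent"
    elif avg > 68:
        label = "above average"
    else:
        label = "at or below average"
    return avg, label

scores = [72.5, 85.0, 65.0, 90.0, 77.5]  # hypothetical participant scores
avg, label = summarize_sus(scores)
print(f"Mean SUS: {avg:.1f} ({label})")
```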
Administer SUS immediately after task completion while the experience is fresh in participants' minds.
Use the standard 10 questions exactly as written without modifying wording to preserve the scale's validity.
A SUS score of 68 is the historical average; scores above 80 generally indicate excellent usability.
Compare scores across releases rather than fixating on absolute numbers to track meaningful improvement.
Combine SUS with qualitative follow-up questions to understand the reasons behind the numeric scores.
Report both the overall score and individual question breakdowns to identify specific usability dimensions.
Ensure participants interact with the product before completing the survey because SUS measures perceived usability.
Calculate confidence intervals when comparing scores between designs to determine statistical significance.
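One way to compute such an interval with only the standard library is a normal approximation, shown below on hypothetical per-participant scores; for the small samples typical of SUS studies, substituting a t critical value (as noted in the docstring) gives a more conservative interval.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def sus_confidence_interval(scores, confidence=0.95):
    """Confidence interval for the mean SUS score (normal approximation).

    For small samples (n < 30), replace the z critical value with the
    t critical value for n - 1 degrees of freedom (e.g. from
    scipy.stats.t.ppf) to get a slightly wider, more honest interval.
    """
    n = len(scores)
    m = mean(scores)
    se = stdev(scores) / sqrt(n)  # standard error of the mean
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # ≈ 1.96 for 95%
    return m - z * se, m + z * se

# Hypothetical per-participant scores for two competing designs.
design_a = [72.5, 85.0, 65.0, 90.0, 77.5, 80.0, 70.0, 82.5]
design_b = [55.0, 62.5, 70.0, 60.0, 57.5, 65.0, 52.5, 67.5]
lo_a, hi_a = sus_confidence_interval(design_a)
lo_b, hi_b = sus_confidence_interval(design_b)
# Non-overlapping intervals suggest a real difference; a two-sample
# t-test gives a more rigorous answer.
print(f"Design A: {lo_a:.1f}-{hi_a:.1f}, Design B: {lo_b:.1f}-{hi_b:.1f}")
```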
Changing the wording of SUS questions invalidates the scale's psychometric properties and makes benchmark comparisons meaningless. Always use the original 10 questions exactly as published by John Brooke.
The SUS score is not a percentage and should not be treated as one. A score of 68 does not mean 68% usability. Use established grade scales (A through F) or adjective ratings (excellent, good, poor) for proper interpretation.
Waiting hours or days after the usability session to administer SUS allows participants' impressions to fade and introduces recall bias. Always administer the questionnaire immediately after task completion while the experience is vivid.
Reporting only the composite score without analyzing individual question responses hides valuable diagnostic information. Examine which specific items score low to understand whether issues relate to complexity, consistency, learnability, or confidence.
Standardized 10-item questionnaire measuring perceived system usability.
Summary of participant background information for segmented analysis.
Outline of when and how SUS surveys will be administered to users.
Aggregated SUS scores with individual item breakdowns per user group.
Comparison of SUS scores against industry averages and prior releases.
Analysis of results with actionable recommendations for improvements.
Charts and graphs communicating usability scores to stakeholders.
Tracking of SUS scores over time to assess improvement trends.