The SUPR-Q: primer and reflections

The Standardised User Experience Percentile Rank Questionnaire (SUPR-Q): a primer on, and reflections about, this single quantitative measure of user experience.

What is the SUPR-Q?

The Standardised User Experience Percentile Rank Questionnaire (SUPR-Q) is an attempt to produce a comprehensive, reliable and valid measure of the user experience (UX) of a product or service.

It was developed by MeasuringU, a company with significant experience in developing approaches for reliably measuring and reporting the user experience.

Most interestingly, since it’s a standardised measure, it’s possible to rank a score against several hundred products, services or organisations across a wide range of industries.

Whilst the SUPR-Q produces a single score (from 0 to 100, with an average score of 50), it also generates sub-measures for…

  • usability,
  • appearance,
  • trust/credibility,
  • loyalty,

…and so it goes beyond assessing only a single factor of user experience. This helps produce a more valid measure of the true perception of a user experience.

The SUPR-Q: questions and factors

The SUPR-Q asks eight standardised questions, one of which is the Net Promoter Score (NPS) question:

  1. The website is easy to use (usability).
  2. It is easy to navigate within the website (usability).
  3. I feel comfortable purchasing from the website (trust).
  4. I feel confident conducting business on the website (trust).
  5. How likely are you to recommend this website to a friend or colleague? (NPS, loyalty)
  6. I will likely return to the website in the future (loyalty).
  7. I find the website to be attractive (appearance).
  8. The website has a clean and simple presentation (appearance).
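Based on MeasuringU's published descriptions of the scoring (an assumption here, not detailed in this article), the raw SUPR-Q score can be sketched as the average of the eight items, with the 0–10 likelihood-to-recommend item halved to bring it roughly onto the five-point scale:

```python
def raw_supr_q(item_scores, nps_score):
    """Sketch of a raw SUPR-Q-style score (assumption: based on
    MeasuringU's published scoring description, not this article).

    item_scores: seven ratings on a 1-5 scale (questions 1-4 and 6-8)
    nps_score:   likelihood to recommend on a 0-10 scale (question 5)
    """
    if len(item_scores) != 7:
        raise ValueError("expected seven 1-5 item scores")
    # Halve the 0-10 NPS item to put it roughly on the 5-point scale,
    # then average all eight contributions.
    return (sum(item_scores) + nps_score / 2) / 8

# Example: strong agreement on all items, likelihood to recommend of 9
print(raw_supr_q([5, 5, 4, 4, 5, 4, 5], 9))  # -> 4.5625
```

Note that the headline percentile rank (0–100) is then derived by comparing this raw score against MeasuringU's proprietary benchmark database, which a sketch like this cannot reproduce.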

SUPR-Q screenshots, survey and report

SUPR-Q strengths

  • simple, standardised, valid and robust,
  • considers experience measures beyond a single factor (like usability only),
  • removes scope for error in homegrown survey design,
  • (generally) easy to administer and complete an 8-question web survey,
  • benchmarking database is updated quarterly,
  • provides product and leadership teams with a reliable, ranked, business measure,
  • more useful/meaningful than the discredited NPS,
  • easy to track macro-trends in the UX of a product, service or site over time,
  • can help ‘sell’ and visualise the importance of UX across an organisation.
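For context on that NPS comparison: the NPS is computed from the same 0–10 likelihood-to-recommend item by subtracting the share of detractors (ratings 0–6) from the share of promoters (ratings 9–10). A minimal sketch:

```python
def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings:
    % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Two promoters (10, 9) and two detractors (6, 3) cancel out
print(nps([10, 9, 8, 7, 6, 3]))  # -> 0.0
```

The single-item, threshold-based design is exactly why NPS discards so much information compared with a multi-item instrument like the SUPR-Q.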

SUPR-Q limitations

  • dataset of organisations has a North American bias,
    • moreover, some sectors are missing, e.g. not-for-profits,
    • you can only see comparable organisations after purchasing the SUPR-Q,
  • doesn’t capture qualitative insights (the proverbial ‘why’),
  • SUPR-Q is not a widely understood business measure, especially for senior leaders and/or those used to familiar, albeit flawed, measures like NPS,
  • SUPR-Q is run through an Excel workbook, which I’ve found problematic to access,
  • workbooks from previous SUPR-Qs expire, making it hard to go back to view previous SUPR-Q insights,
  • the automatically generated SUPR-Q report is basic (you’ll likely need to abstract this into your own report dashboard),
  • manual export/import and normalisation of survey data prior to analysis is a minor chore.

Things to consider

  • quantitative measures can’t diagnose faults, problems, or tell you what to do,
    • (SUPR-Q is certainly not a replacement for ongoing mixed user research methods!),
  • you need a delivery mechanism to serve the web survey to users,
    • we daisy-chain Google Tag Manager, Ethnio and Smart Survey,
    • you can also run the SUPR-Q in a lab setting,
  • the cost of $3,000 to $5,000 per year (a not-for-profit discount is available) is ongoing and could easily consume a significant part of your UX budget,
  • you’ll need to plan and schedule running your quarterly SUPR-Q so that you can integrate your data with the quarterly release. This can conflict with other user research operations if not carefully scheduled.


SUPR-Q is a useful tool for generating reliable scores to benchmark websites' comparative user experiences. The normalised scores show how well a website performs relative to others in the database, and they provide a method for tracking and reporting the user experience of a service across reporting periods.

However, it can be fiddly to administer, its benchmark excludes certain industries and regions, and its cost will preclude its use by smaller teams and organisations.

Further reading

UXPA Journal of Usability Studies


UIE / Jared Spool

By Rik Williams

I write about how to collaborate to design simple, usable and inclusive information experiences that make the lives of customers easier. Read more in Categories and Tags.
