Resources

Best Practices for Asking the Right Usability-Testing Questions

by Brian Checkovich
May 12, 2023

Objection!

One of the most iconic moments in a courtroom drama is when an attorney stands up and shouts to the judge, “Objection! Leading the witness.”

A leading question is the kind that takes a witness by the hand and guides them to the intended answer. In usability testing, testers can fall into the same trap, phrasing questions so they suggest the desired response, as in "How did you find that feature to be helpful?"

Usability testing only works if you ask the right questions. If you spend the time and the money to conduct research on what users think of your website, you don’t want to spoil the effort by amassing biased or invalid data. Here’s how to sharpen your line of questioning.

It Depends

First, determine the objective of the usability test because this testing scenario will dictate the nature of the questions you’ll need to ask. Essentially, there are three main categories of questions: those related to screening potential testers, those used for in-test measurements, and those designed for post-test debriefing. Screening questions will ascertain how good a fit a testing candidate is for your product. In-test questions are rolled out while a user is testing your product. Post-test questions will follow the test and seek further clarification.

Screening

Before you assemble a group of users to test your product, you’ll first need to study your user profile generated by your site’s analytics. In short, the testers you pick need to reflect the demographics and life patterns of your actual or desired users. You might have an ideal user in mind, and if so, you need to explicitly formulate the criteria that you will seek out.

The demographics include age, income level, and educational background. These are independent variables that matter for a reliable usability test. You don't want too many youngsters evaluating a product designed for boomers. Likewise, if your product is a paid service, low-income users probably won't shell out money for it, so they won't be actual users.

Highly educated users might perceive elements on your website through a more critical lens. Because of their training, they may intuit how a feature works while peers with less schooling get frustrated or misconstrue the same feature. The highly educated can give you a false sense of security that your product's design works when in fact it confuses the average person.

Behavioral information can be gleaned by asking questions about daily habits, time spent online, the devices used most, and whether the candidate has interacted with the product before. If a tester is a regular online shopper who's an expert in e-commerce, negative feedback from that person is more valuable than a positive review from someone who doesn't spend much time online.
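Once you've defined demographic and behavioral criteria, screening is essentially a filtering step. Here is a minimal sketch in Python; the field names, thresholds, and sample candidates are illustrative assumptions, not data from any specific analytics platform:

```python
# Hypothetical screener: filter candidates against target-user criteria.
# All field names and thresholds are illustrative assumptions.

def matches_profile(candidate, criteria):
    """Return True if a candidate fits every screening criterion."""
    age_ok = criteria["min_age"] <= candidate["age"] <= criteria["max_age"]
    income_ok = candidate["income"] >= criteria["min_income"]
    device_ok = candidate["primary_device"] in criteria["devices"]
    return age_ok and income_ok and device_ok

criteria = {
    "min_age": 45, "max_age": 70,   # product aimed at older users
    "min_income": 40_000,           # paid service: screen for likely buyers
    "devices": {"desktop", "tablet"},
}

candidates = [
    {"age": 52, "income": 60_000, "primary_device": "desktop"},
    {"age": 21, "income": 15_000, "primary_device": "phone"},
]

panel = [c for c in candidates if matches_profile(c, criteria)]
print(len(panel))  # only the first candidate passes the screen
```

The point of writing criteria down this explicitly, even informally, is that it forces you to state who your ideal user is before recruiting begins.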

In-Test

When a tester is actually using your product, you want to unearth as many reasons why they did something as you can. You want to understand their preferences and choices, which requires the tester to self-reflect.

The key to in-test reliability is to frame questions as open-ended. A good example: "What did you think of this design?" This structure avoids leading the user and allows them to answer freely. If you ask, "Is this design better than the old one?" you are implying the new one is better and also committing the sin of a yes/no question.

When you notice that a user can’t complete a task, like signing up for a newsletter or adding an item to their cart, you will want to ask why. Nothing more. No value judgments about their cognitive ability, no assumptions about their sobriety. Just allow users to describe in their own words their authentic experience with your product.

Post-Test

When a user finishes using your product, now is the time to obtain more granular data. The best tool for generating qualitative responses with a quantitative basis is the Likert scale. Most people have seen a version of a question that begins: "On a scale of 1 to 5, with 5 being the best…" The value of the Likert scale is that it can measure attitudes, perceptions, and values by assigning numeric values that statistical analyses can probe for deeper patterns.

So, for example, if you want to know if using a feature on your website was difficult or easy, instead of an open-ended question that might have hard-to-interpret responses or a yes/no approach that has no nuance, a Likert scale can provide context. Open-ended questions are always useful, but a Likert scale can allow for a broader understanding of where users are, on average.
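To make the "quantitative basis" concrete, here is a small Python sketch of how Likert responses can be summarized. The question wording, the 1-to-5 scale direction (5 = best), and the response data are all made-up assumptions for illustration:

```python
# Illustrative summary of Likert-scale responses (1 = strongly disagree,
# 5 = strongly agree) to a post-test question such as
# "This feature was easy to use." The data below is invented sample data.
from statistics import mean
from collections import Counter

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

avg = mean(responses)                  # central tendency
distribution = Counter(responses)      # how answers spread across the scale
agree_share = sum(1 for r in responses if r >= 4) / len(responses)

print(f"average rating: {avg:.1f}")
print(f"distribution:   {dict(sorted(distribution.items()))}")
print(f"agree or above: {agree_share:.0%}")
```

Even a summary this simple shows where users sit on average, while the full distribution reveals whether opinion is clustered or polarized, which a lone average can hide.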

And use follow-up questions during the post-test, because clarification is the goal. A friendly exchange of ideas is the best method for getting honest feedback. Don't be afraid to hear the truth. Design can't be improved without users dishing on what they think.

User experience won’t improve unless you ask the right questions.

Questions or comments about this post? We're here for you at info@heyreliable.com!
