Metrics for Measuring UX: Lies, Damn Lies, or Statistics You Can Use?
There are three kinds of lies, said someone (Mark Twain? Benjamin Disraeli?): lies, damn lies, and statistics.
Whoever came up with this bromide obviously hasn’t lived through the past five years, because now we know there are hundreds of ways to lie. And thanks to the Big Data revolution, statistical findings are everywhere you look; if you don’t like what a survey or study says, you can just trot out the “damn lies” line to impugn the entire field of statistical analysis.
Which is our way of asking how meaningful the metrics for measuring user experience really are. Can you actually quantify the end result of a design process? To arrive at that determination, we must first agree on what UX even is and then try to assess how to evaluate it numerically.
Here’s a pop quiz: a customer on a website tries to buy a product. Is that part of UX?
Herein lies the foggy confusion surrounding metrics and UX. If you use the definition of UX given by the Nielsen Norman Group, then UX is the sum total of all interactions an end-user has with a company, its services, and its products. In that case, the above scenario would fall within the purview of UX.
But you could also argue, no, the process of buying something from a website belongs more properly to the realm of user interface (UI), or perhaps it describes customer experience (CX), which technically lies outside the lines of UX. What UX does provide is a way of understanding whether that customer will come back, whether that customer was satisfied, and whether that customer will remain loyal to the brand.
Conversion Rates and UX
In the world of analytics provided by Google and others, it is very enticing to think that boosting conversion rates (the rate at which visitors buy something in a digital format) is pretty much the same thing as boosting UX. The former can be endlessly measured, while the latter remains a nebulous concept, as UX pulls from marketing, research, design, and coding, but especially design.
Because numbers can be converted into spreadsheets and morphed into graphs and tables, digits are easier to digest, especially if you can “prove” that because conversion rates increased, so did UX. It’s entirely possible that is the case. But let’s not oversimplify the deeper meaning of UX, and let’s remain somewhat skeptical of how metrics can turn into magical thinking as they attempt to quantify what users are feeling or thinking while they interact with an app or website.
A logical fallacy might be at work here: “post hoc ergo propter hoc,” Latin for “after this, therefore because of this.” It could explain why some insist that good UX causes good conversion rates: one follows the other, so one must have caused the other.
But think about it: what else, having nothing to do with the quality of UX, might improve conversion rates? You can also throw in another important metric, Average Order Value (AOV), which is how much customers spend per order. While conversion rate (CR) and AOV are hugely important business considerations, they can be improved in many ways that have nothing to do with design elements.
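For concreteness, the two metrics mentioned above are simple ratios: conversion rate is orders divided by sessions, and AOV is revenue divided by orders. A minimal sketch in Python, with made-up numbers:

```python
def conversion_rate(orders: int, sessions: int) -> float:
    """Fraction of sessions that end in a purchase."""
    return orders / sessions

def average_order_value(revenue: float, orders: int) -> float:
    """Average spend per completed order."""
    return revenue / orders

# Hypothetical month of traffic: 4,000 sessions, 120 orders, $9,000 revenue.
cr = conversion_rate(orders=120, sessions=4000)        # 0.03, i.e. 3%
aov = average_order_value(revenue=9_000.0, orders=120) # 75.0
print(f"CR = {cr:.1%}, AOV = ${aov:.2f}")
```

Note that nothing in either ratio references design at all, which is exactly the article’s point: a price cut or an ad campaign moves both numbers without a single pixel changing.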
In other words, money talks. If your price point and value are better than your competitors’, or if your marketing team comes up with a brilliant campaign, you will see CR and AOV jump, and those gains have little to do with the design features that undergird UX.
It should go without saying, though, that poor UX can doom your business. Just because CR isn’t always an accurate measurement of the quality of UX does not mean that UX and CR have no correlation; the relationship just isn’t as dependably explanatory as many believe.
Dark Patterns and UX
The clock is ticking… only one left in stock… something sneaks into the basket.
The list of dark patterns is a long one, and designers aren’t always aware that they’re being deceptive. But dark patterns can boost CR and AOV, which companies desire, while worsening UX. Basically, they buy a short-term win at the cost of UX, and the business suffers in the long run.
This again shows why CR and AOV don’t necessarily reveal the true merits of a well-designed UX. Designers want to create digital spaces that are intuitive and allow users to accomplish a well-defined goal. If that’s subscribing to a newsletter, then users should understand precisely how to do that.
And good UX will let these same users unsubscribe in just as easy a fashion. Building a “roach motel,” where check-in is simple but check-out is hard, isn’t a hallmark of excellent UX. Neither is tricking users into buying something they don’t want.
At the same time, you can make UX better and still see CR drop. Giving users fuller information about a product, for example, might kill a sale that withholding the same information would have closed.
Task Metrics and UX
What users do in a digital space matters a great deal to UX, but do the metrics related to tasks really capture its quality?
Task success measures the percentage of users who complete some discrete action, such as filling out a form or making a purchase. No one can argue that measuring things like error rate, how often the Back button is hit, and how often users fail to finish a task is unimportant. These metrics can all be reduced to a numerical reality, a series of numbers that purport to convey UX. But it’s easy to imagine a user coming away feeling positive after a failure, for example, having trouble completing a purchase, getting help, and then coming back to finish the task.
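Under the hood, these task metrics are just aggregates over session logs. A sketch, assuming a hypothetical per-session record of completion and error counts:

```python
# Hypothetical session log: did the user finish the task, and how many errors?
sessions = [
    {"completed": True,  "errors": 0},
    {"completed": True,  "errors": 2},
    {"completed": False, "errors": 3},  # gave up after repeated errors
    {"completed": True,  "errors": 1},  # struggled, got help, finished anyway
]

# Task success rate: share of sessions where the task was completed.
task_success = sum(s["completed"] for s in sessions) / len(sessions)

# Error rate: average number of errors per session.
error_rate = sum(s["errors"] for s in sessions) / len(sessions)

print(f"task success = {task_success:.0%}, errors/session = {error_rate:.2f}")
```

The fourth session illustrates the article’s caveat: it logs an error yet may have left the user feeling positive, and nothing in these numbers can tell the two stories apart.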
Likewise, time on task is a metric whose interpretation remains ambiguous. Is it good if users spend more time finishing a task, or less? There are sound arguments for each. If a user is in a hurry, then spending more time probably means a bad experience, whereas a shut-in who shops for human connection might find spending lots of time on tasks to be very serene.
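There is a measurement wrinkle behind that ambiguity, too: time-on-task distributions are usually skewed by a few very long sessions, so the mean and median can tell different stories. A small illustration with invented timings:

```python
import statistics

# Hypothetical completion times, in seconds, for the same checkout task.
times = [34, 41, 38, 45, 36, 240]  # one user wandered off mid-task

mean_time = statistics.mean(times)      # dragged upward by the outlier
median_time = statistics.median(times)  # closer to the typical experience

print(f"mean = {mean_time:.1f}s, median = {median_time:.1f}s")
```

Reporting only the mean here would suggest a much slower task than most users actually experienced, which is one more way a single number can mislead.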
Tell Your Statistics to Shut Up
So said Charlie Brown to Linus in the famed Peanuts cartoon. Charlie Brown didn’t want to hear the truth, and nothing can deliver a blow like well-presented data. But these numbers are just that, numbers, and they need context to make the most sense. Many metrics can point to usability issues that absolutely must be addressed; just don’t throw out the UX baby with the bathwater when the data might be telling a different story.