You Are STILL Not The User

    April 10, 2019 • Customer Experience, Research, User Testing, UX

     

    This week I am reprinting a classic from the Nielsen Norman Group about the dangers of using internal consensus as a stand-in for actual research and testing. This is a classic mistake, usually committed by product people who have backgrounds in sales and marketing rather than research or engineering. But it is not limited to them: even experts in user-centered design can fall prey to this effect. It’s important, then, to make sure that assumptions about a product’s use and functionality are continually validated by testing and observation.

     

    Post originally written by NN/g researcher Raluca Budiu on October 22, 2017

    By now you have probably heard the phrase “you are not the user” — it’s become one of the mantras of user experience, and rightly so. All our work as UX professionals stems from the assumption that we are different from our users. Artifacts that are right for us are not necessarily right for our users: we can’t judge user-interface quality based on whether we like a design ourselves. We need to learn how to create systems that are right for those who will actually use them.

    Assuming that you are your user is a fallacy that is deeply ingrained in the human mind. It even has a name in social psychology — it’s called the false-consensus effect.

    The False-Consensus Effect

    Definition: The false-consensus effect refers to people’s tendency to assume that others share their beliefs and will behave similarly in a given context. Only people who are very different from them would make different choices.

    The false-consensus effect was first defined in 1977 by Ross, Greene, and House. They showed that, unlike scientists, “layperson psychologists” (that is, all of us when we are put in the position of guessing how others will behave) tend to overestimate how many people share their choices, values, and judgments, and to perceive alternate responses as rare, deviant, and more revealing of the responders.

    Ross and his colleagues ran a series of experiments in which participants had to estimate what percentage of people would make one of two choices: for example, what percentage of people would choose to contest a speeding ticket in court versus just pay the fine. After they made their estimate, participants disclosed what they would do in that situation, and also filled in two questionnaires about the personality traits of those who would make each of the two choices. The researchers discovered that participants expected (1) that the majority of people would make the same choice that they made (e.g., pay the ticket), and (2) that those opting for the alternative would have different, more extreme personality traits than those opting for their choice.
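
    To make that measurement concrete, here is a minimal sketch in Python. The data, numbers, and variable names are invented for illustration; they are not from the Ross et al. study. It shows the shape of a false-consensus gap: each group of respondents estimates that its own choice is the majority one.

        # Hypothetical illustration of the false-consensus effect. The responses
        # below are invented; they are not data from Ross et al. (1977).
        # Each respondent chose to "pay" or "contest" a speeding ticket, then
        # estimated what percentage of people would choose to pay.
        respondents = [
            {"own_choice": "pay", "estimated_pct_pay": 70},
            {"own_choice": "pay", "estimated_pct_pay": 65},
            {"own_choice": "pay", "estimated_pct_pay": 80},
            {"own_choice": "contest", "estimated_pct_pay": 40},
            {"own_choice": "contest", "estimated_pct_pay": 35},
            {"own_choice": "contest", "estimated_pct_pay": 45},
        ]

        def mean_estimate(choice):
            # Average estimated "% who would pay" among people who made `choice`.
            estimates = [r["estimated_pct_pay"] for r in respondents
                         if r["own_choice"] == choice]
            return sum(estimates) / len(estimates)

        # The false-consensus effect predicts a gap between the groups: people
        # who chose to pay overestimate how common paying is, and vice versa.
        print(mean_estimate("pay"))      # ~71.7: payers see paying as the norm
        print(mean_estimate("contest"))  # 40.0: contesters see it as the minority

    In the actual experiments, the size of this gap between the two groups was, roughly, the measure of false consensus: the larger the gap, the stronger the bias.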

    We tend to assume that our next-door neighbor has voted for the same candidate that we did in the last presidential election. Only someone who is very different from us — living in a completely different part of the country, from a different socioeconomic class, with a different education — could have voted for the other candidate. Or so we think.

    These assumptions are natural. The human mind makes inferences based on one or a few examples: if our ancestors were attacked by a wild beast, it made sense to assume that the beast was dangerous and to stay away from it, even in the absence of other examples.

    Generalizing based on the examples available is called the availability bias and is a type of cognitive bias. (Others include negativity bias, loss aversion, narrative bias, and framing — which is a type of priming.) It is often a source of stereotypes and overgeneralizations. As my yoga teacher put it, with those Eastern European roots, I should have no problem with back bends — as if all Romanians were Nadia Comăneci.

    Why You MUST Test

    Much in the same way, we designers, developers, and UX researchers assume that the people who will use our interfaces are like us. We have one example of someone using the interface: us. And maybe our colleagues. And we make generalizations based on those examples. So only someone who’s stupid or very different from us could actually fail to figure it out.

    Wrong. We are wrong to believe that, but it’s important to understand that we are no worse human beings for doing so. It is deeply woven into our nature to believe that others are like us.

    So what’s a fallible human being to do? And how about a fallible designer or software developer? The answer is simple. Learn about this bias. Acknowledge it. And then do something to overcome it. When it comes to user interfaces, the answer is simpler than in other avenues of life: Test. With real users (not your colleagues). Know who your users are and how they respond to your designs by watching them use these designs. Don’t make assumptions.

    UX researchers are also subject to the false-consensus effect and the availability bias (and to many other biases). Much of our qualitative work involves looking at a few users and one design, and then making inferences to other similar, but not quite the same, situations. Or applying heuristics and the knowledge that we’ve acquired to new paradigms. It’s important to understand that these inferences can be biased — we may be wearing blinders. Often what works in one situation may not work in others, and vice versa.

    Acknowledge your vulnerability and establish checks. Don’t validate; instead investigate. Do a study with your actual target users whenever there is the slightest doubt.

    About

    The UX voice crying in the wilderness, but glad that it's getting better all the time.

    http://grapnel.net/carroll
