Chapter 19

Creating a User-Centered Corporate Culture

Throughout this book we have argued that observing and engaging with your users are essential to making a popular, compelling, and profitable product or service. But let’s face it: User research is just one of many, many inputs that shape what an organization makes and does. How much does it currently matter to your company? That’s a question you should try to answer honestly. We have tried to show how to do the best possible research and how to deliver it most effectively; the remaining challenge is to make your organization into one that values high-quality research, and where critical decisions are made based on what that research reveals. This goes beyond what you, or even your team, can do all on your own. It’s a matter of corporate culture.

A user-centered development process often demands that developers shift perspectives and spend time walking in their users’ shoes. This fundamental shift takes time and happens differently in different organizations. The International Organization for Standardization (ISO) has defined several levels of organizational “usability maturity”: At the lowest level, the company does not recognize that it might have any problem with usability, while at the highest level, user-centered design is “embedded in…development strategy to ensure usability.” Most companies, of course, fall somewhere in between.

With examples of companies succeeding through design and user focus in front of them, many organizations declare that they value user-centered design and user research. A CEO might express this in terms of a core company value, or point to another company to emulate, such as the LEGO Group or Zappos. But organizations still find it difficult in practice to prioritize research when they need to ship a product on schedule, or to spend money on user experience research when that money could instead be spent advertising or marketing the product. Rather than make the case anew for each project, we recommend that you focus on changing the development process to make user research and design an expected step, and on giving the people who do research and design a clear place at the table in decision making.

Work with the Current Process

When you begin each research project, you ideally go through a planning and definition process, as we describe in Chapter 4, in which you get some context on the product or service your research is about. Why is it being developed? What does it do? You meet the stakeholders and sponsors who care about it and learn their priorities and success criteria for it.

If you continue to work with the same organization across projects (whether as a consultant or in-house), you have the opportunity to learn these things about the organization itself. When you take time to do this “internal discovery,” you learn how employees want development to work, how it actually works, and how you can help it work better.

You also get better at identifying key stakeholders for any given research project: those who are clearly interested, those who should be interested, those who oppose it, and those who are likely to help it succeed. They may be people who are already on your team, or they may be in adjacent departments. They may be top executives, opinionated engineers, or the salespeople who must represent your product to customers.

Even if a high-up executive wants the whole company to be user-centered tomorrow, speedy transitions are almost impossible. These practices take hold one project and one team at a time. In the same way, adopting methods wholesale almost never works, because organizations have different expectations, resources, and relationships. Instead, making user research work for your company means trying something, observing the impact, reflecting on how it could work better next time, and doing it again.

Start Small and Scale Up

Starting small often works best. If they fail, small projects are not seen as severe setbacks. When they succeed, they can be used as leverage for larger projects. Moreover, they educate everyone in the methods, goals, and expectations of user experience research and user-centered design.

How small is small? Initial targets should be short-term, doable, and straightforward. The ideal project revises or adds a feature with few dependencies, so that changes are achievable in a short time frame and the impact is easy to observe. For example, if you were working on a website, a good first project might be to investigate the usability of an underperforming registration experience and identify changes that might increase the number or rate of sign-ups. If there isn’t a small project readily available, you might choose a project on which you can start early, integrating user research from day one into an initiative that is separate enough from the rest of the company’s efforts that you have license to do things a different way. Finally, you might start at the grassroots level, getting one small team to adopt user-centered processes in a way that can serve as an example for others.

John Shiple, former director of user experience for BigStep.com, suggests building user experience research into website home page redesigns. Home pages and landing pages are redesigned on a regular basis, and they serve as entryways and billboards for the company. They are important parts of the user experience, but their design often affects presentation more than functionality, so unsuccessful designs can be rolled back easily. This gives their redesign scope and importance without the weight of redesigning core functionality.

Prepare for Failure

Prepare to stumble as you start bringing user-focused techniques in-house. You may do the wrong research at the wrong time on the wrong part of the product. You may ask all the wrong questions in all the wrong ways. You may be asked to do something that’s inappropriate, but have to do it anyway. The research results may be useless to the development staff, or the staff may ignore them completely. Management may ask for hard numbers that are impossible to produce.

This is one of the reasons to start small. It is also an occasion to be philosophical. Bad research happens to everyone, but every piece of user research, no matter how flawed, provides insight, even if it’s only into how to structure the research better next time. Recognizing failures as they happen greatly reduces the likelihood of repeating them. Set appropriate expectations for yourself and for those who will receive the results, use problems to make further plans, get feedback from your interview subjects, and examine how well the research is going even while it’s still underway.

Involve Stakeholders

A heuristic evaluation—or, thoughtful criticism—often does not convince partners that such and such sucks. A live event—with donuts—behind one-way glass can. That’s sad, but true.

—Dave Hendry, Assistant Professor, University of Washington Information School

Getting people to observe and participate in research is one of the most effective ways to sell them on its value and effectiveness. Watching users fail to use a product is an incredibly powerful experience. It shows the observers that the way they think may be entirely unlike the way their users think.

The development team and key stakeholders need to be involved in research, but in different ways.

The Stakeholders

Those who make decisions about a product need to see the value of research firsthand. The best way to accomplish that is to make them watch it happen. They don’t have to actively participate, but if they can be in the same room (or observing remotely) while people struggle, then listen to the discussions that these struggles inspire in the development team, they are much more likely to support such efforts in the future.

The Development Team

Those who are directly involved need to see the research process and its results. For them, involvement in the research should include direct participation in developing the research goals, creating research prototypes, and analyzing the results. Once they’ve participated, they are much more apt to appreciate the process and to integrate it into their future plans.

Including everyone in every bit of research is often difficult, however, and developers need to be directed toward research that will be most meaningful for them. For example, people from marketing and business development are more likely to be interested in ideas that speak to broad, strategic issues with the product. These are often embodied in focus groups or contextual research studies and less so in usability tests. Technical writers and trainers benefit from knowing which concepts people want explained better, issues that are revealed during task analysis or usability testing interviews. Engineering and interaction design absolutely need to be included in usability testing, but participating in field visits may not be as immediately useful to them (that said, these two groups should probably be in as much research as possible, since they are in the best position to apply bottom-to-top knowledge of the user experience).

Respect your development team. Just as there’s a tendency in developers to write off users who don’t understand their software as “stupid,” there’s a tendency in user experience researchers to write off developers who don’t understand user-centered design processes as “stupid” or at least “clueless.” Take an active part in development, explain processes, and listen for suggestions. If a development team seems clueless at first, it’s surprising how quickly they appear to wise up when engaged as partners in a research project.

Interaction designer Jeff Veen described the reaction he got from one company’s staff to the idea of user research. “To the engineers, usability was a set of handcuffs that I was distributing; to the designers, it was marketing.” Jeff’s solution was to turn research into an event. He made it clear that something special was happening to the company and to the developers: that management was listening to them and really looking carefully at their product. He first arranged for the delivery schedule to be loosened, giving the developers more breathing room and alleviating their apprehension that this process was going to add extra work that they wouldn’t be able to do. Then he invited all of them to watch the usability research in a comfortable space, with all meals included. As they all watched the interviews together, Jeff analyzed the test for them, emphasizing the importance of certain behaviors and putting the others in context. He also encouraged the developers to discuss the product. After a while, they noticed that they were debating functionality in terms of specific users and their statements, rather than first principles or opinions.

Make Research Visible

Don’t wait until the end of a study to share your findings. Report as you go, enlisting your stakeholders to help you communicate if necessary. Create deliverables that are visually and conceptually engaging and that can be shared easily, even among people who did not attend the research and presentations. Of course, if you have involved the team and shared short interim updates, both the findings and your process will be all the more accessible.

When you have research findings, put them in front of people. For example, if you once had ten minutes with a VP, request a similar amount of time on a regular basis and present the highlights of recent research once a month. Even though she may not be interested in the specifics of each project, the steady stream of interesting tidbits will be a useful reminder of the process and its value. Schedule lunchtime seminars on user-centered design and invite people in from outside your company to talk about successful projects. Buy some pizza. Having an outside voice discuss situations similar to those your group is encountering can be quite convincing.

You might also look for opportunities to share your work outside the company. Demonstrating your successes (with company permission, of course) can enhance the company’s prestige in its industry and may attract new customers.

Measure Your Impact

Don’t just use metrics in your research. Remember to measure the effectiveness of your research program as a whole. In an ideal world, your products and processes will be so improved that it’s unnecessary to convince anyone that user research and user-centered design are valuable, but in most cases you will need to be able to demonstrate the effectiveness of these practices. Even when everyone notices improvements, organizations need some measurement to determine just how much change has occurred and to set goals for the future. See Chapter 16 for how to identify the product metrics the company currently cares about, and consider looking beyond those, to metrics that speak to the company’s operations.

Pricing Usability

The ideas in this section are heavily influenced by Cost Justifying Usability, edited by Randolph G. Bias and Deborah Mayhew. It is a much more thorough and subtle presentation of the issues involved in pricing user experience changes than can be included here.

Using metrics to calculate return on investment (ROI) is very convincing, but notoriously difficult. Many factors simultaneously affect the financial success of a product, so it’s often impossible to tease out the effects of user experience changes from the accumulated effects of other changes.

That said, a solid case for bottom-line financial benefits is the most convincing argument for implementing and continuing a user-centered design process. On the web this is somewhat easier than for consumer electronics, for example, where many changes may happen simultaneously with the release of a new version of a product. E-commerce sites have it the easiest. Their metrics are quickly convertible to revenue:

• Visitor-to-buyer conversion directly measures how many visitors eventually purchase something (where “eventually” may mean within three months of their first visit, or some such time window).

• Basket size is the average value of a single purchase.

• Basket abandonment measures how many people started the purchasing process but never completed it. Multiplied by the average basket size, this produces an estimate of lost revenue.

Each of these measures is a reasonably straightforward way of showing that changes to the site have made it more or less profitable.
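To make the relationships among these measures concrete, here is a minimal sketch in Python of how they might be computed from basic analytics counts. Every name and figure below is a hypothetical illustration, not data from any real site.

# Hypothetical e-commerce metrics computed from raw analytics counts.
visitors = 100_000          # unique visitors in the measurement window
buyers = 2_400              # visitors who eventually purchased something
total_revenue = 180_000.00  # revenue from completed orders, in dollars
orders_completed = 3_000    # completed checkouts
checkouts_started = 5_000   # sessions that entered the purchasing process

conversion_rate = buyers / visitors                   # visitor-to-buyer conversion
average_basket = total_revenue / orders_completed     # basket size
abandonment_rate = 1 - orders_completed / checkouts_started
lost_revenue = (checkouts_started - orders_completed) * average_basket  # abandonment times basket size

print(f"Conversion: {conversion_rate:.1%}")
print(f"Average basket: ${average_basket:,.2f}")
print(f"Abandonment: {abandonment_rate:.0%}, estimated lost revenue ${lost_revenue:,.0f}")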

Other kinds of services, for example, those that sell advertising or disseminate information to employees, require different measures. Since salaried staff answer customer support calls and email, reducing the volume of support calls and email can translate to direct savings through reduced staffing needs. However, support is usually a relatively minor cost, so reducing it adds little to the bottom line. What’s important is to find ways of measuring the increases in revenue that result from developing a product in a user-centered way.

For example, take the redesign of a news website to make it easier to find content. The average clickstream grows from 1.2 pages to 1.5 pages, a 25% increase in page views that translates to a proportional increase in advertising revenue. Seems pretty cut and dried. But say a marketing campaign launches at the same time. Both the usability and marketing groups can claim that their effort was responsible for the increased revenue. To justify the user experience perspective and separate it from marketing, estimate the impact of usability and its ROI. This produces a formula that can spark a discussion about the relative effects of advertising and usability, but at least the discussion can take place on (relatively) even footing.

Here’s an example of how the argument for the ROI of user experience versus marketing might go.

Recently, our site underwent a redesign that resulted in increased page views and advertising revenue. This came at the same time as a marketing campaign encouraging people to visit the site.

Our site analytics tell us that the average length of a session was 1.2 pages for the 8-week period before the redesign. This means that people were primarily looking at the home page, with roughly 20% of the people looking at two or more pages (very few people looked at more than four).

Usability testing showed that users had a lot of trouble finding content that was not on the home page. One of the goals of the redesign was to enable them to find such content more easily.

For the 4 weeks after the redesign, the average clickstream was 1.5 pages, a 25% increase in per-session pages and pageviews. The marketing campaign certainly contributed to this increase, but how much was due to increased usability of the site? If we suppose that 30% of the increase was due to the greater ease with which people could find content, this implies a 7.5% increase in pageviews (30% of 25%) directly attributable to the more usable site.

Using the average number of monthly pageviews from the past year (1.5 million) and our standard revenue share per click of $0.01, this implies a monthly increase in revenue of $1,125. If the marketing effort was responsible for getting people to the site but the new design was responsible for all the additional clicks, the added accessibility would have been responsible for $3,750 of the increase in monthly revenue.

However, deep use of the site is different from just visiting the front door, and it has an additional effect on revenue. The revenue per click for subsections is $0.016. At 30% effectiveness, this would imply an increase of $1,800 in revenue per month, or $21,600 per year, roughly 10% of all revenue.

Our costs consisted of 60 hours of personnel time at approximately $75/hour, or $4,500, plus $500 in incentive and equipment fees. Thus, the total cost to the company was approximately $5,000. Compared to the annual return, this represents roughly a 270% to 330% return on investment at the end of the first year.
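To make the arithmetic explicit, here is a minimal sketch in Python that reproduces the figures in this example. The variable names are ours, and the per-click rates and attribution percentage are simply the example’s assumptions, not industry benchmarks.

# Sketch of the ROI arithmetic from the example above, using its own figures.
baseline_monthly_pageviews = 1_500_000   # average monthly pageviews, past year
pageview_increase = 0.25                 # clickstream grew from 1.2 to 1.5 pages
usability_share = 0.30                   # assumed portion of the increase credited to usability
home_rate = 0.01                         # revenue share per click, home page
subsection_rate = 0.016                  # revenue share per click, subsections

added_pageviews = baseline_monthly_pageviews * pageview_increase   # 375,000
usability_pageviews = added_pageviews * usability_share            # 112,500

monthly_gain_conservative = usability_pageviews * home_rate        # $1,125
monthly_gain_full_credit = added_pageviews * home_rate             # $3,750
monthly_gain_subsections = usability_pageviews * subsection_rate   # $1,800
annual_gain_subsections = monthly_gain_subsections * 12            # $21,600

research_cost = 60 * 75 + 500                                      # $5,000 in personnel, incentives, equipment
first_year_return = (monthly_gain_conservative * 12) / research_cost
print(f"First-year return (conservative): {first_year_return:.0%}")  # 270%, the low end of the range cited above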

In some cases the ROI may be purely internal. If good user research streamlines the development process by reducing the need for postlaunch revisions, it may create considerable savings for the company even though it does not affect direct revenue. Comparing the cost of revisions or delays in a development cycle that used user-centered techniques to one that did not could yield a useful measurement of “internal ROI.”
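A sketch of what such an “internal ROI” comparison might look like, again in Python and with purely invented figures for illustration:

# Hypothetical comparison of two development cycles, one with user research and one without.
revision_cost_without_research = 120_000   # postlaunch fixes under the old process (assumed)
revision_cost_with_research = 60_000       # postlaunch fixes with user-centered techniques (assumed)
research_cost = 15_000                     # cost of the research itself (assumed)

savings = revision_cost_without_research - revision_cost_with_research
internal_roi = (savings - research_cost) / research_cost
print(f"Internal ROI: {internal_roi:.0%}")  # 300% with these invented numbers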

Build Long-Term Value

You need to be stubborn, committed, and shrewd, but know when to back off and when to barge into the senior VP’s office.

—Chauncey Wilson, senior user experience researcher

As you do more research projects, look for ways to leverage and demonstrate the value of accumulated research. As the expert on your research “body of knowledge” and on the users themselves, you can add value by answering questions that come up on a day-to-day basis and by preventing the reintroduction of problems identified in previous research. Sometimes, the response to research will be lukewarm. Reports can sit unread on managers’ desks or be discounted by engineers who see insufficient scientific rigor in them. In these situations, continuing the research is especially important, showing that results are real, valuable, and consistent.

What If It’s Just Too Difficult?

None of this stuff is easy, but what if the entrenched corporate culture just resists your efforts to integrate user research? What do you do then?

First, identify the nature of the resistance. Resistance comes in two primary forms: momentum and hostility.

People fall into old habits easily. User-centered design takes time, energy, and commitment. Doing things “the old way” is almost always going to be easier and may seem more efficient. “Well, if we just did this the old way, we could do it in three weeks. You want us to do a month of research first? That’ll just make us miss our deadline even more!” Using near-term efficiency as a pretext for rejecting long-term thinking leads to more inefficiency over time. Gut-level decisions made in the “old school” fashion are much more likely to create extra work fixing avoidable mistakes later.

The way to counter momentum is to build speed in a different direction. As described above, a slow introduction of user experience research techniques can be effective when coupled with an internal “marketing campaign” documenting the process’s effects on the product and the company. Likewise, an executive commitment to revamping the whole development process can be effective, although it requires more resources and commitment than some companies are willing to expend. Large changes in direction can’t happen overnight, though: They take time and a commitment to incremental advances.

Hostility is more difficult to resolve. In certain cases, developers find users threatening. They call users “lusers,” describe the process of making usable products as making them “idiot proof,” and approach their users’ demands or misunderstandings with derision. Such statements betray an attitude toward the people who use the product that’s somewhere between arrogance and insecurity. Unfortunately, when hostility is especially irrational and obstinate, there’s little that can be done except to focus your efforts on the more rational stakeholders on the team.

However, direct exposure to users can help convince the doubtful. At first, seeing users fail at something can confirm the doubter’s worst fears: that people are profoundly different and alien. The experience can feel really uncomfortable, but it brings home many of the core ideas. Extended exposure almost always reveals that a product’s users aren’t idiots. They just see the world differently from the developers, and their problems and concerns may be easier to alleviate than the developers think. In some ways an aggressive challenge can allow for a more radical, more rapid change in mindset than a program of slow subterfuge. Sometimes that challenge doesn’t work. But there’s no way to discover whether the hostility can be resolved without addressing it through a simple, small project.

Avoid research paralysis. When user-centered processes start becoming popular and the development team has bought into them, there’s a tendency to want to make all changes based on user research. This is an admirable ideal, but it’s generally impractical and can cause development to grind to a halt.

Don’t get distracted by research and forget the product. It’s okay to make decisions without first asking people. Just don’t make all your decisions that way.

Following and Leading

As new product development becomes globalized, companies used to leading on technology find that they have an ever-shorter window to establish market presence before their innovative new product or service becomes a low-price commodity. Increasingly, companies like these are turning to user experience as a proven differentiator. There are examples of companies in every sector that have successfully used user experience research and design to create profitable products they would not have developed otherwise. And simply put, the best way to learn what “great” really means is to learn it from users.

Google’s first stated principle is “Focus on the user and all else will follow.” But as you incorporate research-driven, user-centered design, something unexpected happens. You begin to know your users so well that you can sometimes anticipate what they might want. By understanding how they act and how they think about the problems you are trying to help them solve, you can get a feel for what they will adopt and what they won’t. Knowing that you can count on users’ feedback, help, and collaboration allows you to experiment with bolder ideas, consider a wider range of solutions, and take risks more confidently than you might otherwise. When user experience is truly part of your corporate DNA, you can be a leading company in every sense.
