Chapter 8

More Than Words
Object-Based Techniques

The stuff that dreams are made of is often difficult to express in words but may be imaginable as pictures in your head.

Elizabeth Sanders, Generative Tools for CoDesigning

Talking and listening underpin most user research techniques. However, there’s a lot that people know and feel about the world that they have difficulty expressing in words. Maybe they have difficulty remembering the concrete details that will bring a vague memory of a vacation to life. Maybe they can’t quite articulate what would improve their experiences with a frustrating wait for medical attention in a hospital emergency center. Maybe they aren’t quite sure how to explain something as psychologically complicated as their relationship to saving money. Or maybe you just want to know how other people might draw connections between different parts of a website that currently are not linked.

That’s what these techniques are for. They supplement what you can learn by interviewing and observing people by adding objects that participants can use as props to think with and through. This chapter will cover three main types of object-based techniques:

1. Photo elicitation is a dialogic technique that prompts conversation between researchers and participants.

2. Collage and mapping are generative techniques that invite participants to externally represent internal thoughts and feelings.

3. Card sorting is an associative technique that asks participants to group objects together in order to surface how they order and make sense of the world.

When to Use Them

Because they do not typically give targeted answers to precise questions, generative and dialogic techniques are most useful in formative, exploratory research. They help you understand how people come to think, feel, and know about their lives. They are not good for testing hypotheses or answering specific tactical questions about product direction. What they are good for is early-stage, “fuzzy front-end” development, when the goal is to open up a space for exploration rather than to justify selecting one direction over another. Design scholars refer to this widening of perspective as divergent thinking, as opposed to the convergent activity of narrowing one’s options. As Pieter Jan Stappers and his colleagues at the Delft University of Technology write, “The researcher uses these methods not to answer precisely framed questions, but in order to generate the questions themselves, in directions he or she does not control: in order to find the blind spots.”

Why would you want to find your blind spots? Well, if you’re trying to expand your current market, develop a totally new product or service, or even substantially update what already exists, it’s important to question your assumptions. That’s how you avoid mistakes and come up with unexpected breakthroughs. These exploratory techniques also produce lots of rich, inspirational data about people’s aspirations, values, and aesthetics—the kind of information that designers can use to generate new concepts and shape decisions about form and function. They can also help researchers map more intangible characteristics or qualities of an organization, such as relationships between stakeholders, information flow between elements or stakeholders in a system, and competencies within a value chain.

The associative technique of card sorting, as we’ll discuss later in this chapter, is a little different. It is most often used for tactical, convergent decision making about products that are already in the process of active development. Because researchers need to put together a collection of words, phrases, or images to be sorted, it’s more effective at organizing information you already have, rather than opening up new areas for investigation.

All of these techniques typically take an hour or less and usually happen in conjunction with interviews or focus groups. For this chapter, we are assuming that you have already chosen a primary research activity.

Dialogic Techniques

One of the simplest ways to stimulate discussion is to show people things, from images to performances, and have them respond. With a concrete example in front of them, people can immediately comment rather than struggle to imagine an example and communicate it to you. Examples give both the researcher and the participant a shared reference point. One of the most commonly used dialogic techniques for user research is photo elicitation.

Photo Elicitation

People often say that a picture is worth a thousand words. In photo elicitation, that’s not quite true. The goal of photo elicitation is not to substitute images for words, but to use pictures to stimulate vivid, concrete, meaningful words. In photo elicitation, participants respond to a set of images that researchers show them. The idea, based on decades of qualitative social science, is that photographs don’t speak for themselves; they require active interpretation by the viewer. As viewers discuss images, you can begin to understand what they see in them, and apply those interpretations to your own project.

Let’s return to the example of the hospital emergency center project. You are trying to get people to recall an experience that was probably physically and emotionally difficult—one that they may have tried to forget but that may still provoke intense reactions. Some critical details may have been forgotten, while others might remain so vivid as to obscure other important points.

This would be a good opportunity for a diary study (see Chapter 10), to capture the experience as it’s happening—except that people obviously can’t plan when they will have a medical emergency. So it would be very hard to recruit them ahead of time. Moreover, asking participants to take photographs of other sick people, or of clinicians who have not given their permission, creates a host of legal and ethical problems. In that sort of situation, photo elicitation comes to the rescue.

How to Do It

Assembling the Images

First, you will need to collect a set of images that you think will help you answer your research questions (see Chapter 4 for a discussion of how to formulate research questions). The images used in elicitation are either user-generated or researcher-assembled. Initial research, such as stakeholder and pilot user interviews (Chapter 6) or competitive analysis (Chapter 5), can produce a list of standard images to stimulate discussion that you assemble yourself. To get user-generated photographs, you will need to contact participants ahead of the interview. They will produce the photographs either through the course of usual activities or in pre-interview exercises initiated by the research team. For example, if you are designing a new camera for documentary photography, you might want to ask photographers for examples of their own existing work. However, if you are interested in learning more about how busy working parents feel about their cars, you’ll probably want to ask the car owners to take some specific photographs of car seats, trunk storage, and garaging ahead of time.

Participant-generated photograph elicitation activities resemble simple diary studies or probes. Chapter 10 discusses how to set up a self-documentation exercise with participants.

In the case of the emergency center, there are several ways to assemble your own collection of hospital images. First, you could download photographs from stock photography websites, or search for photographs of emergency centers on websites where people share personal photography. The latter can be especially useful to prompt discussion, as stock images are often too polished to include the messy details that can really prompt recollection. Second, with the permission of hospital staff, you might take your own photographs of the emergency center, documenting those people, places, and objects you know you will want to discuss with participants. Third, depending on your research aims, you might collect some hospital informational and marketing materials to see how they match the experience of your participants.

Make sure you have the legal right to use any photographs you download for research purposes.

Once you’ve got about 20 or 30 images, it’s time to decide which ones you will present, and in which order. Generally, you will want about six to nine images, allotting about five minutes of discussion per image. At this point, you will probably have many more images than you could possibly address in an hour. It’s time to cut.

One good way to start creating a manageable collection is to write down all the research questions on separate pieces of paper. Then print out all the possible images. Place each photo next to the research question it supports. If a photo supports more than one research question, just print out another copy. Then sort the photos under each question in order of relevance. Make sure that each research question has at least one image that seems to directly address it. For example, research on hospital emergency centers might be concerned with both what people do to pass the time while they wait and how they feel about waiting. A photo of an angry-looking person pacing next to a person sleeping on a chair might allow you to probe both questions at once.

Just like interview questions, image order is sensitizing: it will direct your participants’ attention to certain subjects and away from others. If you are interested in an activity or event (e.g., fixing a car, visiting the hospital) you may want to follow the chronological order of that activity in order to help people recall it. Alternatively, if you are interested in a state or activity with no specific chronological order (e.g., attitudes toward water conservation or photograph editing tools), you could move from more general, atmospheric photos to more specific, concrete ones in order to query participants on more general attitudes before digging into the details of behavior.

Writing the Script

With the images assembled, it’s time to write the script. Sometimes called a “protocol,” sometimes a “discussion guide,” the script is really just that—a list of instructions for the moderator to follow so that the interviews are consistent and everything gets done. For the purposes of keeping this script simple, we’re assuming that you’ve already introduced yourself to the participant, gained informed consent, and proceeded with some introductory questions.

Introduction (3–5 minutes)

The introduction is a way to break the ice and give the participant some context.

[Don’t start showing images until after you’ve finished the introduction.]

What I’d like to do now is show you some photographs. These are photographs that you took for us, remember? When I show you the photograph, I’d like you to speak your reactions out loud. I’m going to ask you some follow-up questions as well. If there’s anything in the photograph that you do not want to discuss, please feel free to ask me to move on to the next question.

In the introduction, you want to make sure that you remind participants of how you got the photographs (they gave them to you!) and that they don’t have to answer any questions they don’t want to. People don’t always realize what’s in the background of their photographs, and you don’t want to shut down your interview by accidentally asking a series of questions about objects or people your participant finds embarrassing or disturbing. This establishes a comfort level about the process and their role in it.

For researcher-generated photographs, you will use much the same language, but make sure that you explain whether the photographs have been customized to them in any way. For example:

Now, I’d like to show you some photographs of hospital emergency centers. Some of them will be from a hospital near where you live. But others may be from hospitals that are far away, and so they may not look familiar to you. They are just here to help us have a conversation about your experience with emergency centers.

Give a basic explanation of the origin of any researcher-assembled images early on so that participants don’t interrupt the interview with questions about where the photographs came from or why you chose them.

Then you’ll move on to prompting discussion with the photographs. This is the bulk of the activity and, depending on your schedule and the talkativeness of your participants, can take up to five minutes per photograph. You don’t want to make people look at too many photographs or they’ll get bored. You will also need to take into consideration what other activities you have planned for the interview or focus group.

Elicitation (30–45 minutes)

Showing people objects during the initial introduction can be distracting, as is showing all of your collected objects at once. Instead, proceed one by one, asking (at least initially) identical questions about each photograph. A standard set of questions might be:

Can you tell me more about what was happening when you took this photograph?

Why did you choose this place (or object, or person)?

People differ in how they interpret photographs as objects. For many people, a photograph is a straightforward documentation of reality. They may not think about whether the photograph is specially cropped or digitally altered. Experts, however, may pay as much attention to the construction of photographs as to their subject matter.

That means you may want to ask specific questions about how the photographs were made, if that seems appropriate:

Have you altered the photo in any way with software? Can you tell me about what you did?

For researcher-generated photographs:

What is the first thing that comes to mind when you see these images?

What are some words you would use to describe how you would feel if you were part of this scene?

When you’re done with the photographs, you’ll conclude and move on to the next section of your interview protocol, if there is one.

Conclusion (2–7 minutes)

Thanks for looking through these with me. Before we move on, do you want to return to any of the photos and say anything more?

Offer participants a chance to go back and discuss any of the photographs, just in case there’s something they didn’t get the chance to say.

Conducting an Elicitation

The format of the images you choose to show will depend on how much control you have over the interview environment. It may seem more convenient to simply bring a computer along to show your images. However, printing them on smooth paper and attaching them to sturdy cardboard allows participants to really get close to them: to handle them, sort them, and stack them in ways that aid their storytelling and help you understand how they are making associations between them. Participant-led image elicitation may require a large number of images that cannot be so easily printed out; in that case, make sure that you have software that can show the images sequentially (as in a slide show), give an overview of the whole collection, or search for a specific image.

Unless you have completely reliable Internet access, do not depend on image-sharing websites or other online tools for image elicitation. When in doubt, test your connection first to make sure that you have the correct passwords, firewall access, and download speeds.

If you are using printed photos, you may want to make sure you have a flat area on which you can spread them out. If you are using a computer, make sure you have power.

Other Types of Dialogic Research

You don’t have to show your participants only photographs. Dialogic exercises can use any materials related to the project that could elicit an emotional response (Figure 8.1).

image

Figure 8.1 Lextant, a design research and user experience design firm, uses “multi-sensory stimuli” to elicit reactions from research participants. These stimuli can include working portable consumer products, such as mobile phones, as well as material samples to show texture, finish, and even smell. Here, a research participant groups and labels a collection of multi-sensory stimuli.

Image courtesy of Lextant.

Elizabeth studied relationships to shopping and advertising in families by bringing a collection of advertisements to interviews—from newspaper coupons to print advertisements in glossy magazines. She also asked participants to show her their favorite websites and to discuss the ads featured there. She spent half of the interview on researcher-generated prompts (the print advertisements) and the other half on the online advertisements. A common set of prompts between interviews allowed some comparison of responses, while visits to personally chosen websites stimulated more in-depth discussion of habits and attitudes.

You can also show participants videos, or even have them react to scenes performed live in front of them by trained actors. Whatever you show participants, you will likely face some similar questions in writing a research plan. Table 8.1 summarizes the many options for elicitation exercises.

Table 8.1. Image Elicitation Options

Image

Generative Techniques: Making Things

Generative techniques allow participants to externalize emotions and thoughts by creating objects that express them. In discussing the objects with participants as they make them and then analyzing them later on their own, researchers learn more about desires, sensations, and aspirations that are often hard to explain.

Uday Dandavate, of the design research firm SonicRim, often explains generative techniques as ways to access people’s schemas. Schemas are mental frameworks that organize our experience. While individual schemas can change over time, in the moment they shape people’s assumptions about how the world can and should work. Developed by psychologist Frederic Bartlett in the early 20th century, the concept of schemas is now used widely in psychology, cognitive science—and user experience research. Through generative activities, Dandavate says, participants help researchers “gain access to their preconceived abstract mental structures that form the basis of their understanding” of a service or product.

Generative techniques either deploy a toolkit of basic elements provided by the researcher or guide the participant in an open-ended process of making something completely new. This section covers the activities of collage, which is typically toolkit-based, and mapping, which is usually more open-ended.

The most practically helpful guide to using generative techniques we have found is Contextmapping: Experiences from Practice, available online from the Delft University of Technology.

Collage

In collage, individuals or groups of people make a new composition out of a pre-existing set of elements. Often called “mood boards” by product designers, collages are useful for just that—expressing attitudes, desires, or emotions. They are easy and fun to make but can deeply inform future design through collaborative interpretation by researchers and participants.

The goal of collaging is not so much to make a coherent and consistent statement whose meaning is immediately clear. Instead, the goal is to help participants express themselves, first through the making of the collage, and next through conversation about the collage. Dr. Gerald Zaltman, of the Harvard Business School, calls this “metaphor elicitation”: the structured use of significant images to invoke personal associations and analogues.

How to Do It

Assembling Components
Project-Based Toolkits

You can put almost anything that’s printed on paper into a collage—not just photographs, but also shapes (e.g., squares, circles), icons (e.g., arrows, smiley face symbols), and words (e.g., “boring,” “escape”). Many sites charge very little to license stock photographs for limited use. Some best practices (adapted from advice given by the Delft University of Technology) for picking images to produce rich and stimulating collages include:

• Use preliminary research (e.g., competitive analysis, pilot interviews, books and articles about the subject domain) to help you pick components. Look for words that show up frequently or that seem to have contradictory meanings.

• Vary the image subjects (e.g., plants, animals, people, and things). Also vary the human environments portrayed (e.g., different areas of the home, different kinds of workplaces, and exotic or unfamiliar landscapes). Include images of people of both sexes, as well as different ages and ethnicities.

• Balance positive and negative emotionally tinged images (e.g., a laughing baby and a frowning adult), as well as realistic and abstract images.

• Avoid any consistent style or mood. That is, don’t work to collect images that will look aesthetically pleasing together. The point is to give participants varied ingredients with which to articulate something that’s hard for them to state outright, not to create professional-looking outputs.

• Include only a few images that literally show the research topic (e.g., if you’re studying hospital emergency rooms don’t have too many pictures of doctors, stethoscopes, pills, etc.). You may want those images as a base, but too many of them will constrain the discussion.

Then add some generic components, such as icons and geometric shapes. Most of these generic components can be reused from project to project. Many icons and shapes are available as free files online.

Typical toolkits include about 100 photographs and 100 words. You want to give participants a wide choice of shapes, icons, photographs, and words, but you don’t want to overwhelm them. The images in Figure 8.2 demonstrate the range of possible choices.

image

Figure 8.2 Sample pages from image collection for collage exercise. Pages were printed out on sticky-back paper.

Image courtesy of Adaptive Path.

Participant-Chosen Images

You can also ask participants ahead of time to bring their own photos. In that case, contact your participants a week or more ahead of the planned collage date and ask them to spend two to five hours finding images that represent their feelings about the topic of the interview. If the photographs are in a digital format, have them email the photographs to you in advance so that you can print them out and/or work with them digitally.

Participant-chosen images make for a highly variable result. The advantage is that participants may tell more personally meaningful and detailed stories. The disadvantage is that all the collages will have different components, with no real basis for comparison. If you are interested in, for example, identifying the colors and textures that your target audience associates with “safety” in hospital emergency centers, you may be more interested in what images are chosen from a common set of stimuli rather than the specific details in individual collages.

Preparing the Components

You’ll need enough sets of components for all your participants, and then a few extras just in case. The most efficient way to present those components is to lay out many images on the same page, as in Figure 8.2. You can either print the pages on sticky-back paper or use plain paper and provide glue.

What else do you need? Think elementary school arts and crafts.

• Scissors and glue

• Geometric paper cutouts such as stars, squares, and circles

• Colored markers and pens for annotation and drawing

• Sheets of plain paper (11 × 17 inches or larger) as the backing for the collage

Writing the Script

Here’s a brief guide to a collaging session with one person. At this point, we assume that you’ve already introduced yourself and the study and signed the consent forms. If you are leading a group collage exercise, plan time in the beginning for everyone to introduce themselves by first name and tell something about themselves (such as a favorite color) to break the ice. This will help them feel more comfortable about sharing other experiences with strangers.

Introduction (3–5 minutes)

What we’re going to do next is called a collage exercise. We’re really interested in your personal experience with [subject topic here]. We’d like to learn about [subject topic here] through your eyes. Please choose some of the images and words in your kits and arrange them on the big piece of paper in front of you in a way that represents your own experience with [subject topic here]. You can do whatever you want—there’s no right way to do this exercise. If you have any questions, feel free to ask. We’ll take about 20 minutes.

The most important role of the introduction is to emphasize that people are free to interpret the instructions as they like. Second, it should ask for only personal memories and perceptions, not what participants believe is the general opinion.

Collage (20–30 minutes)

During this period, participants work silently. They may occasionally ask you questions. Try not to imply that there is a right or wrong way to choose and place pictures.

Interview and Discussion (20–30 minutes)

I’d like you to tell me/us about your collage and why you chose the images and words you did.

Have participants present their collages as in a show-and-tell exercise. As they talk, ask them to explain why they chose those particular images and words and what they mean to them. Why is one image next to another one? Are they related? You may also want to probe whether there is an overall order or logic to the visual layout of the collage.

If you are running a group exercise, the participants can then discuss the exercise as a group once every person has spoken.

One workshop can include multiple generative exercises. Just make sure that you include time for open group discussion after each one if you are running a group session.

Wrap-up (3–10 minutes)

Thank your participants for helping out, and ask if there’s anything more they want to tell you about the collage or about the experience.

Conducting a Collage Exercise

Just as with the earlier section on photo elicitation, we assume that you’ve already done the research planning and recruiting for this activity. You know what your major research questions are and have people who you think can help you answer them.

Because you need to give people time to make a collage and then discuss it, allot one to two hours for the entire exercise. As with focus groups, it can be helpful to have a moderator and a note taker present to manage group collage exercises. The note taker can also serve as a timekeeper, since keeping a group collage exercise moving on schedule can involve gently moving people from one part of the activity to the next.

Make sure you have a big enough table for all your participants to work comfortably on. Depending on the participants’ comfort with the idea, people can also work on the floor. Before the participants show up, put a stack of components, scissors, glue (if necessary), and pens in front of each chair.

Have a video camera on a tripod behind the moderator to record the discussion. Once again, an external microphone on the table will help you capture slightly better audio.

Analyzing a Collage Exercise: Avoiding Temptation

You may be tempted to speed up your analysis by interpreting the collages as solely visual objects, without reference to your notes or recorded media from the discussion. You may also be tempted to count the number of times the same image appears in multiple collages and use that as a measure of the “meaning” of the activity. And, indeed, many consumer researchers use both of those tactics.

Our advice: be careful! Yes, it is possible to look for quantitative (e.g., numerical) trends in the data. If most of your participants choose the same image from your toolkit when asked to make a collage about “safety” in hospital rooms, don’t ignore that pattern. But you cannot simply assume that you know what the placement of the photograph means to each participant—or that the same photograph means the same thing to different people. For example, one photo elicitation study conducted by Froukje Sleeswijk Visser and design research firm P5 Consultants found the following explanations (emphasis ours) of the same photo of a swimmer preparing to dive into a pool:

P1: “I always shave myself in the evening. So I dive into my bed, completely fresh and clean.”

P4: “I feel very sharp after shaving.”

P3: “I always shave myself before going to work. I work in the swimming pool as a swimming teacher.”

So while only looking at photographs may be inspirational for design, it will not necessarily assist in understanding potential users. In the same way, counting the incidence of any given component can also be misleading if you assume that each component always represents a single meaning. Instead, go through your notes or audio and make sure you link verbal explanations and discussions with each individual photo. For example, P1 and P4 seem to have a similar psychological response to shaving: it feels “sharp” and “clean.” P3 approaches the swimmer more literally, as a reminder of when and why he shaves. In the end, you are looking for patterns not just in the photos but in the relationship of multiple people’s words to the photos they choose.

Mapping

A map is just a visual representation of relationships between people, objects, and spaces. Maps have three main uses in user research. First, they help participants add concrete details to what might otherwise be abstract answers about habits and preferences. They can help prompt richer, more interesting stories. Second, it can be easier to visually analyze and compare different maps of the same place—or the same sort of place, like a home or a workplace—than it would be to compare verbal descriptions. Third, and most importantly, maps reflect people’s beliefs about the spaces and objects around them: how they define those spaces, how they categorize them, and what they feel about them. If you are designing context-aware mobile applications, domestic appliances, or interactive environments, understanding how people relate to places is crucial to the success of your product.

Spatial Mapping

What’s your route from home to work? Where’s the nearest place to get some coffee? How would you tell a guest cooking in your kitchen where to find a frying pan, eggs, salt, some butter, and a plate?

Most of us have some practice at reading and drawing simple spatial maps, so it’s one of the easiest mapping techniques to explain and carry out. It also (unlike photo elicitation and collage) requires little advance work to assemble materials.

How to Do It

To begin, all you’ll need is a piece of paper (preferably 11 × 17 inches or larger), some pens, and some colored markers. Abstract shapes and graphic icons (as assembled for the photo collage exercise) can be helpful, but they’re not necessary. You’ll also need a flat space large enough for participants to comfortably spread out. Place an audio recorder nearby or use a video camera to document the drawing process.

The exercise takes about as long as the photo collage, and the same basic principles apply. You can do this exercise with individuals or groups, as long as you allot some time for participants to present their maps and discuss them as a group.

It is helpful, if possible, to get a sense of the space before asking the participant to map it, whether you’re interested in a room, home, workplace, or even an entire city. It’s hard to ask good questions when you have no idea whether participants are exaggerating the relative sizes or distances of regions or objects, or whether they are leaving out certain regions or objects altogether. For a neighborhood or city, you might look at maps beforehand. If you are interested in the layout of someone’s home, you might first schedule a set of getting-to-know-you questions to break the ice, and then ask for a tour of the house. After the tour, you can sit down and start mapping.

A typical one-on-one mapping exercise during an interview looks like this:

Introduction to the exercise (3–5 minutes)

Explain the purpose of the mapping exercise. Make sure to explain that you are interested in the participant’s personal experience, not that of the “average” visitor or inhabitant.

Mapping (30–45 minutes)

First, ask participants to sketch a map of the place in question. Don’t ask for precision or accuracy. What’s important is understanding why people represent size and distance in certain ways. So if you notice that a participant’s bedroom looks twice as large as the kitchen (when you know the opposite is true), follow up with a question:

Can you tell me why you drew the bedroom that size?

In following up on absences or strange proportions, avoid implying that the map is badly drawn or somehow “wrong.” You didn’t ask for an architectural blueprint, after all.

Once you have a basic plan view of the space, ask participants to draw in other significant objects. Think of them as landmarks. The identity of those objects depends on what you’re studying. If you’re interested in children’s play, you might have participants indicate where all the toys and other play objects in the house are located. If you’re interested in physical security in the workplace, you might ask the participant to mark locked doors and guard stations. Ask questions as they go to make sure you understand what everything is.

With a base map and landmarks, participants have a solid basis from which to recall activities and attribute meaning to regions on the map. You can ask people to trace the paths of their habitual movements, activities, or routines on the map, step by step. This can involve differently colored arrows, lines, and written annotations (see Figure 8.3 for an example). This kind of question is particularly useful for understanding a physical “journey” through a space or experience, as with someone’s weekly grocery shopping or a visit to the doctor’s office.

image

Figure 8.3 A “cognitive map” of a Brazilian household, created during an Intel research project on domestic life around the world.

Image courtesy of Intel Corporation.

You can also ask people to mark regions and zones that are important to them, such as favorite and least favorite places to do certain tasks, places of play and work, or places where physical access is limited or forbidden. Keep on asking follow-up questions. Make sure to suggest that participants use differently colored markers for each activity or region—that will help you keep track of the different questions later.

Conclusion (5–7 minutes)

In the conclusion, as usual, ask if there’s anything more participants want to add. Is there anything they forgot to draw in? Is there an expected question that you didn’t ask?

Social Mapping

Social network maps have become very popular in the past few years; they’re the node-and-link style visual diagrams that represent people as points and relationships as lines between them. Social network maps are usually generated by software that traces explicit, named relationships on social websites or extracts implicit relationships from people’s communications over email, chat, or telephone. They represent human relationships from the viewpoint of communication systems, and they can have hundreds or even thousands of nodes and links.

Here, we are talking about the reverse: getting a picture of communication from the viewpoint of humans. These diagrams are handmade by participants and focus on the most consciously meaningful relationships in their lives. They are a way to get people thinking and talking about the tools they use to make and sustain those important relationships. If you are designing any kind of product or service that facilitates social interaction between people, these kinds of maps can help you understand the dynamics you are designing for.

How to Do It

As with spatial mapping, you’ll need a sheet of paper (preferably 11 × 17 inches or larger), pens, and colored markers. Get a few packs of Post-it™ notes and small stickers in four or five colors. Make sure the stickers are small enough to fit three or four on a Post-it note and leave some space for writing.

Once again, you’ll also need a flat space. Place an audio recorder nearby or use a video camera to document the drawing process.

A typical one-on-one social mapping exercise might go like this:

Introduction to the exercise (3–5 minutes)

Explain the purpose of the exercise, emphasizing that you’re interested in the participant’s personal experience.

Mapping (30–45 minutes)

First, give the participant a stack of Post-it notes. Ask her to write her name on one and place it anywhere she wants on the paper.

Now, ask her to write down the names of other people in her life, one per Post-it. The wording of this question will affect the results, of course. You should give participants some specific, concrete instructions so they know whom to add. You don’t want to suggest that you’re interested in hearing about a best friend if you’re interested in getting a picture of workplace communication. For example:

• If you’re studying communication, ask about “people you are in contact with once a week or more.” At this point, it may be helpful to suggest that people take out mobile phones or computers (or paper mail!) and check to see whom they’ve contacted lately.

• If you’re studying emotional attachment, ask about “people who you would talk to about a personal success or trouble.” Again, it may be helpful for the participant to refer back to their usual communication tools.

When your participant has gotten going with the names, ask her to stick the Post-its to the paper. Have the participant place the Post-its in proximity to her name based on, for example, frequency of communication or degree of emotional intimacy. Then ask the participant to group together people who have something in common. As she places Post-its, ask questions like, “How do you know that person?”

As the participant places Post-its on the page, she will probably remember more people to write down and place. That’s fine. Have her rearrange the Post-its until all the groupings make sense. Ask her to name the groups. The goal is not perfect accuracy—it’s just to get a sense of which people are important and how the participant puts them into groups.

Next, have the participant list the main communication tools she uses on one corner of the page. You will then put a different colored sticker next to each tool. To maintain consistency between interviews, it helps if you have already matched each color to a tool you expect to hear about.

Give her the sheets of colored stickers and have her use the list to place stickers on each person to represent the tools they use for communication. As she places the stickers, probe for further information with questions like:

How did you start using [name of tool] to communicate with [name of person]?

Can you give me an example of the last time you were in contact with him/her?

What was the subject of the conversation?

Oh, so you sent him a picture? Where was the picture from?

Does your mother often send you links in email?

In addition to asking for specific examples, a good way to get more concrete detail in answers is to ask the participant to review her major means of communication (for example, text messages, phone call list, email, or social website) and discuss the past few days’ activity.

You could probably keep going with these sorts of questions forever, but 45 minutes is about as much as most people can take (Figure 8.4).

image

Figure 8.4 Results of social mapping exercise.

Image courtesy of Paul Adams, from his 2010 presentation, The Real Life Social Network v2.

Conclusion (5–7 minutes)

In the conclusion, as usual, ask if there’s anything more participants want to add. Is there anything they forgot to draw?

In Conclusion

Sketching relationships of closeness and distance on paper can help your participants discover and explain phenomena that they take for granted. In turn, it can help you make better recommendations—whether you need to know where to install information kiosks in a transit station, or which people your likely users email most.

You don’t just have to map space—you can also map time. That’s what we call a “time line.” Asking people to chronologically list the major activities of their day (also known as a “day in the life” exercise) can give you tremendous insights into habits, routines, and everyday struggles.

Remember, however, that maps don’t speak for themselves. They need interpretation. Part of the map’s value is in how you use it to prompt follow-up questions about behavior and values. Analyzing a map on its own, without notes from the conversation, is difficult and misleading.

Maps also suffer from all the limitations of our own memory and perceptions. Maps reveal perspective. For example, a child’s map of her school will likely not include the maintenance office. That doesn’t mean the maintenance office isn’t important, but it does give you a child’s-eye view of what matters.

Associative Techniques: Card Sorting

Card sorting is a technique that helps uncover how people organize information. It works exactly like it sounds. Participants sort cards with words or phrases on them into groups. How cards get organized—and what labels participants give to each group—can tell you a lot about how participants relate and categorize concepts. That, in turn, can help you create visual and structural relationships that make sense to users. You can then use those relationships to understand the sequence of tasks in an activity, structure databases, organize navigational elements, or name features and interface elements.

For a thorough guide to card sorting, we suggest Card Sorting: Designing Usable Categories, by Donna Spencer.

When to Do Card Sorting

Unlike the other object-based techniques in this chapter, card sorting is best at answering tightly scoped information organization questions. For existing products, it typically serves as a means to solve a clear problem. Maybe there is evidence that users aren’t finding what they want on a website, or perhaps two websites need to be combined into one. Card sorting is most effective when you know what kind of information needs to be organized, but before you have figured out how to do it. At that point, a product’s purpose, audience, and features are established, but there is not yet a fixed information architecture or interface design. However, since it’s fast and easy, you can also use card sorting whenever you need to change an information structure.

There are two kinds of card sorting: open and closed. In open card sorting, participants sort the cards in any way they want. In closed card sorting, participants assign the cards to predefined groups. Open card sorts are more generally useful as a user research technique, because they produce richer information about user-made categories. However, closed card sorts can be tactically useful in adding to an existing information structure or in answering minor questions about an information structure that you know is working well.

How to Do It

Recruiting

Like the other techniques in this chapter, card sorting is suitable for both individuals and groups. Group card-sorting activities can prompt valuable discussion and debate about what cards “go together,” but will also require additional attention to coordination and moderation to make sure that the views of one or two participants don’t dominate the sort.

Card sorting is often a quiet individual activity. You can schedule several people simultaneously if you can give the participants enough room to work alone without feeling crowded, and if moderators circulate so that participants’ questions are answered quickly. If you have only one moderator, stagger the schedules by about 15 minutes so that the moderator has time to give each participant an introduction to the technique. An hour is more than sufficient for most card-sorting studies.

Preparing Cards

The core of the card-sorting technique is, not surprisingly, the cards themselves. First, assemble a collection of words and phrases that represent the information you are interested in organizing. If you’re trying to uncover how people organize concepts, explain the concepts on the cards with a sentence or two. However, if you’re trying to see how people understand a set of terms without necessarily knowing your definitions for them, you can just write them on the cards.

These words and phrases can come from many places: from content you already possess, from terms that the development team uses to describe sections and functions, from interviews with stakeholders or potential users, or from competitive analysis. However, just as with any research technique, your outcomes will only be as good as the prompts you give participants. Donna Spencer recommends ensuring that:

• Your terms make sense to participants. This sounds obvious, but sometimes it’s hard to know in advance if a common technical term will mean nothing to a nonprofessional. If your product has multiple groups of users who have very different vocabularies (for example, students and teachers, or doctors and patients), you might have to create multiple sets of cards.

• Your collection contains some reasonable groups. A test run of your card sort exercise should verify that the items can form some clear groups. If you ask participants to cluster items that don’t share any qualities, you are wasting your time and theirs.

• Your terms are at the same granularity (level of detail). People facing cards titled “forks,” “spoons,” and “silverware” will be tempted to make a group of “forks” and “spoons,” then label it “silverware.” That may look like a successful card sort, but you aren’t learning anything new about the kinds of groups people make without your prompting. One tip, from Michael Hawley of the research firm Mad∗Pow, is to select terms one layer down in an information hierarchy from the level you’re interested in. The groups in the card sort should then suggest how to organize and label the higher-level content.

• Your terms represent the most important content or functionality. Talk to stakeholders to make sure that your sort includes the most relevant concepts.

• Your prompts aren’t biasing. For example, repeating similar words across multiple cards is likely to lead people to group those cards together, even if there’s some evidence that the cards could or should be separated. Additionally, avoid specific brand or product names as labels. Those can bias people toward existing corporate marketing messages or organizational structures. Instead, substitute a more generic description of the product or service.

You can have as few or as many terms as you want. However, the size of a standard card deck (52) strikes a good balance between not providing enough cards to make adequate categories and providing so many that it’s overwhelming. If you have hundreds of terms, consider breaking them up across multiple tests.

Next, write your collection of terms on a deck of sturdy note cards. Depending on the size of your collection, it may save time to enter the names into a word processing program or spreadsheet and print the items onto mailing labels that you then stick to the cards. To minimize distraction, use cards that are identical, except for their titles. Also, it will simplify later analysis if you number each card in one corner at this point.

As always, do a test run with a friendly outsider to diagnose any problems: card titles that are biased or unclear, or perhaps misleading instructions. You will also want to test your chosen analysis tool. Will it handle the amount and kind of data that all the combinations of cards will generate? Will it produce the kind of analytic results that you will need to bring to project stakeholders?

The Sort

After bringing in participants and going through all the initial formalities, introduce them to the concept. Say something along the lines of this:

Each card in this stack has the name of something that you might see on the website. I’d like you to organize the cards into groups that make sense to you. Take as much time as you need. There are no right or wrong groupings. Try to organize all the cards, but not everything needs to belong in a group. You won’t have to provide a reason why cards belong in the same group, so if a group feels right, go with it. You also don’t need to think about how this group might relate to the design of a website. Focus on what makes sense to you, not what may make sense to anyone else.

Provide a stack of Post-it notes, several pens, and a pile of small binder clips or rubber bands. After they’re done grouping, ask them to label the groups if they can, but remind them that not every group necessarily needs a label. Don’t tell participants that they’ll be labeling ahead of time since that tends to bias people to organize based on labels rather than on what they feel are natural groupings. When they’re done, ask them to clip or rubber-band the cards and place the label on the groupings. If you have numbered the cards, you can then just quickly note the numbers for each card on the Post-it label instead of writing or typing each title.

Then ask a brief set of follow-up questions, perhaps something like:

Can you tell me why you made each of these groups?

Which card is the best example of each group?

Which groups were easiest to assemble? Which were hardest? Why?

The answers to these questions will tell you more about the logic of the groupings, which will matter later as you conduct the analysis. Make sure you audio record or take good notes.

Card Sorting to Prioritize

Card sorting is primarily an organizational or naming technique, but you can also use it to understand how people prioritize features.

Label the cards with current and potential features. First, have the participants place the cards into one of four piles describing how valuable they felt the feature would be to them—from “most valuable” to “not valuable.” Then, take the “most valuable” pile and have the participants sort those cards by frequency of predicted use. That allows you to differentiate between immediate interest in a feature and its potential usefulness. Now put the cards into six numerical categories:

0 – Not valuable

1 – Least valuable

2 – Somewhat valuable

3 – Most valuable, rarely used

4 – Most valuable, sometimes used

5 – Most valuable, used often

Then calculate the median value of each card over all the participants. If a number of choices have the same median, calculate the standard deviation of the choices. Lower standard deviations will represent greater agreement among participants about the value of the feature. (See Chapter 12 for definitions of these terms.) Ranking first by user preference, then by agreement, can help teams prioritize features for development.
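If you capture the pile assignments digitally, this ranking is easy to automate. Here is a minimal sketch of the calculation in Python, assuming each participant’s choices have already been converted into the 0–5 scores above; the feature names and ratings are hypothetical placeholders, not data from a real study.

```python
# Minimal sketch of the prioritization ranking described above.
# Assumes each participant's pile choices were converted to 0-5 scores;
# the feature names and ratings below are hypothetical placeholders.
from statistics import median, stdev

ratings = {
    "Export to PDF": [5, 4, 5, 3, 5],
    "Offline mode":  [3, 3, 4, 3, 2],
    "Dark theme":    [1, 2, 1, 0, 2],
}

summary = []
for feature, scores in ratings.items():
    med = median(scores)
    # A lower standard deviation means greater agreement among participants.
    spread = stdev(scores) if len(scores) > 1 else 0.0
    summary.append((feature, med, spread))

# Rank first by user preference (higher median), then by agreement (lower spread).
summary.sort(key=lambda row: (-row[1], row[2]))

for feature, med, spread in summary:
    print(f"{feature}: median = {med}, std dev = {spread:.2f}")
```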

Card Sort Analysis

You can analyze card sorts both qualitatively and quantitatively.

Qualitative Analysis

When you have the clusters from all the participants, look at them. Copy the clusters to a whiteboard or a spreadsheet. If using a whiteboard, it can be handy to simply refer to each card by number.

By eyeballing the trends in the clusters, you can infer how people understand relationships between the various elements. For example, if people put “News,” “About us,” and “What we like” together, it tells you they’re interested in putting all the information coming from the company’s perspective into a single place. However, if they group “News” with “Latest Deals” and “Holiday Gift Guide,” then maybe they associate time-related information together.

Try three different types of analysis. First, look at the clusters as a whole. Can you discern any logic behind their organization? Don’t treat the clusters literally. People’s existing organizations may not make a scalable or functional architecture. Instead, look for underlying themes tying them together. Pay attention to the cards that people didn’t categorize or that were categorized differently by everyone. What about the card is giving people trouble? Is it the name? Is it the underlying concept? Is it the relationship to other elements?

Second, follow one card at a time through its various groupings. Are there any cards that consistently appear together? These activities are easier if you have entered the cards into a spreadsheet, which you can then sort in various ways.
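If the groupings are already in a spreadsheet or other digital form, a short script can follow the cards for you. The sketch below, in Python with hypothetical cards and groupings, counts how often each pair of cards lands in the same group across participants.

```python
# Rough sketch: count how often each pair of cards ends up in the same group.
# The sorts below are hypothetical; in practice, load them from the spreadsheet
# of numbered cards described above.
from collections import Counter
from itertools import combinations

sorts = [
    [["News", "About us", "What we like"], ["Latest Deals", "Holiday Gift Guide"]],
    [["News", "Latest Deals", "Holiday Gift Guide"], ["About us", "What we like"]],
    [["News", "About us"], ["What we like", "Latest Deals", "Holiday Gift Guide"]],
]

pair_counts = Counter()
for participant_groups in sorts:
    for group in participant_groups:
        # Every unordered pair of cards this participant placed together.
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

for (a, b), count in pair_counts.most_common(5):
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```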

Third, look at the labels. Do any words or phrases appear consistently to describe the same cards? Are there common groups that nonetheless got very different labels? Looking at the labels and their relationships to the clusters underneath can help you build a structure that matches your users’ expectations. Even if it is not used in an interface or information architecture, the terminology can be useful when explaining the product to potential clients and users.

Throughout this process, pay attention to participant comments. What do they say about their reasons for organizing the cards as they did?

Quantitative Analysis

Percentage

With a few simple formulas, spreadsheets can make it easy to calculate more numerical measures of similarity and difference. For example, it’s helpful to automatically calculate the percentage of times a card appears in one of the standardized categories, or which categories have the most agreed-upon sets of cards. With the help of a spreadsheet, this isn’t difficult, but it does require a detailed setup.

Instead of creating their own spreadsheets, many professionals rely on spreadsheet templates. Check www.mkp.com/observing-the-user-experience for templates, recommendations for specialized card-sorting software, and other card-sorting resources.

If you performed an open sort, your participants generated their own descriptions for each group of cards. This can present some problems during percentage-based quantitative analysis, since there will likely be multiple groups with similar names that suggest similar concepts. If you preserve those different names, it may be harder to see patterns during qualitative or percentage-based quantitative analysis. In many cases, you will need to start by creating a standard set of labeled categories to which multiple cards can belong.

Are there any clusters of groups with noticeably overlapping labels? For example, take a set of categories with names like “Schedule,” “Program schedule,” “Event program,” and “Event times and places.” At least for the moment, you can probably give all those groups the same label, derived from the most generally used words in the cluster. In this case, it would likely be “Event schedule.” Don’t worry; you can change the final name of the category later. Just record which category labels you condensed together.

Under no circumstances should you create a group called “Miscellaneous” or “Random stuff.” If your users have created such groups, you should either decide that those types of information are irrelevant to the project (it happens!) or assign them to categories. In information organization, naming a group “Miscellaneous” is a sign of laziness or despair. It’s preferable to spend extra time trying to figure out why some cards are hard to categorize rather than include a quick fix that just adds confusion to your organizing system.
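Once the raw labels have been condensed into standard categories, the percentage calculation itself is straightforward, whether you do it in a spreadsheet or in a few lines of code. Here is one possible sketch in Python; the category mapping, cards, and placements are hypothetical examples rather than a recommended template.

```python
# Sketch of the percentage calculation, assuming you have already decided how
# raw participant labels condense into standard categories. All names below
# are hypothetical examples.
from collections import defaultdict

# Which raw labels condense into which standard category (your judgment call).
standard_labels = {
    "Schedule": "Event schedule",
    "Program schedule": "Event schedule",
    "Event times and places": "Event schedule",
    "Getting there": "Travel",
    "Directions": "Travel",
}

# (participant_id, card, raw_group_label) rows, e.g., exported from a spreadsheet.
placements = [
    (1, "Opening keynote", "Schedule"),
    (2, "Opening keynote", "Program schedule"),
    (3, "Opening keynote", "Directions"),
    (1, "Parking map", "Getting there"),
    (2, "Parking map", "Directions"),
    (3, "Parking map", "Event times and places"),
]

counts = defaultdict(lambda: defaultdict(int))
participants = set()
for pid, card, raw_label in placements:
    participants.add(pid)
    category = standard_labels.get(raw_label, raw_label)
    counts[card][category] += 1

# Percentage of participants who placed each card in each standard category.
for card, by_category in counts.items():
    for category, n in sorted(by_category.items(), key=lambda kv: -kv[1]):
        print(f"{card} -> {category}: {100 * n / len(participants):.0f}%")
```

The same logic maps directly onto a spreadsheet: one row per placement, a lookup column that converts the raw label to its standard category, and a pivot table to produce the percentages.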

Statistical

Cluster analysis is a branch of statistics that measures the “distance” between items in a multivariate environment and attempts to find groupings that are close together in the variable space. This allows you to uncover groups of objects that are similar across many dimensions, but that may not be obviously alike in any one of those dimensions. Since people have trouble visualizing more than three dimensions, and there are often more than three variables that can determine similarity, the technique is used to “see” clusters that would otherwise go undiscovered.

In card sorting, cluster analysis locates underlying logics by looking at the clusters people make. Are certain things grouped together more often than other things? Are there hidden relationships between certain cards? These are all things that are hard to see by just looking at the cards. Unfortunately, the mathematics of cluster analysis is difficult without special software. Statistical software packages contain modules that can do cluster analysis, but these are expensive and require an understanding of the statistical procedures used in the analysis. Specialized card-sorting software (see sidebar) is a better choice if you’re not a statistics whiz.
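If you do want to run the numbers yourself rather than rely on specialized software, one common approach (a sketch under our own assumptions, not a method prescribed here) is to convert co-occurrence counts into a distance matrix and feed it to a hierarchical clustering routine such as SciPy’s. The cards and sorts below are hypothetical.

```python
# Sketch of hierarchical cluster analysis on card-sort data using SciPy.
# One common approach among several; the cards and sorts are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

cards = ["News", "About us", "What we like", "Latest Deals", "Holiday Gift Guide"]
sorts = [
    [{"News", "About us", "What we like"}, {"Latest Deals", "Holiday Gift Guide"}],
    [{"News", "Latest Deals", "Holiday Gift Guide"}, {"About us", "What we like"}],
    [{"News", "About us"}, {"What we like", "Latest Deals", "Holiday Gift Guide"}],
]

# Count how many participants placed each pair of cards in the same group.
n = len(cards)
together = np.zeros((n, n))
for groups in sorts:
    for group in groups:
        for i in range(n):
            for j in range(n):
                if i != j and cards[i] in group and cards[j] in group:
                    together[i, j] += 1

# Distance = 1 - (fraction of participants who grouped the pair together).
distance = 1.0 - together / len(sorts)
np.fill_diagonal(distance, 0.0)

# Condense the symmetric matrix and build an average-linkage hierarchy.
tree = linkage(squareform(distance), method="average")

# Cut the tree into (for example) two clusters and print the membership.
labels = fcluster(tree, t=2, criterion="maxclust")
for card, cluster_id in zip(cards, labels):
    print(f"{card}: cluster {cluster_id}")
```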

Cluster analysis does not remove the human analyst from the process. As Donna Spencer points out, while statistical methods “can help you spot patterns, they don’t allow you to identify why a pattern exists.”

The card-sorting process sheds light on people’s existing understandings and preferences, and it can show subtle relationships that may not be obvious by just examining a list of clusters. It also provides an idea of how concepts relate to each other, since what may seem like a strong relationship when casually examined may turn out to be weaker when actually analyzed.

Card-Sorting Software

While many professionals prefer the hands-on approach of paper index cards, there are a number of software tools for card sorting. You can use them while the participant and moderator are in the same room, but remote card sorting is increasingly popular. Remote card sorting is typically unmoderated: participants perform the card sort on their own time, with no interaction with a researcher. You can reach a much larger number of participants in a shorter amount of time, but you will not have an opportunity for real-time conversation about sorting decisions.

Note that software packages will produce statistical measures of similarity for you, which can make them especially good choices if you need quantitative analysis and you are in a hurry. You’ll find links to card sorting software on the website for this book.
