Index

Note: Page numbers followed by “f” denote figures; “t”, tables; and “b”, boxes.

0-9

5% standard error, 353b
95% confidence, 353b

A

Actions, 229b
Adaptability, as iterative development benefit, 34
Adobe Connect, 309–310
Advertisers, success criteria for, 24–29
Advertising
recruiting methods using, 103–104
revenue, 25
AEIOU framework, 231–232
Agile software development and user research, 56b–57b
Alibris.com, 78
Alignment diagrams, 518
Amazon, 76–78
American Marketing Association, 122b
American Society for Information Science & Technology, 412
Anonymity, recruiting and, 121–122
Arbitron radio diary, 249–252, 253f
Artifacts, collecting, 236–237
Askildsen, Tormod, 7
Assistant moderator for focus groups, 169, 175
Association for Computing Machinery Special Interest Group on Computer-Human Interaction (ACM SIGCHI), 123
Attitudinal survey questions and subcategories, 332–333
Attributes of competitive products, 81
Audiences
field visits, 219
finding, 97–100
for focus groups, 147, 149–151
overdetermining, 100
preparing, 546
profile, 79
for recruiting, 97–108
August, Steve, 244–245
Automatically gathered information, 453–475
clickstream analysis, 463–464
collecting more useful metrics, 466–468
customer feedback, 468–475
analyzing comments quantitatively, 473
collecting, 470–471
customer-facing stakeholders, collaborating with, 473–474
making the most of, 474–475
reading comments, 471–473
sources of, 469–470
exploring the data and observation, 475
for Internet advertising, 464–465
measure, research, design, and test, 465–466
metrics, types of, 459–465
planning for, 455–456
session-based statistics, 462
site-wide measurements, 460–462
software and services, 458b
usage data analysis, 454–468
usage data ethics, 467b–468b
user-based statistics, 462–463
web analytics, 456–458
Average path analysis, 463

B

“Back to basics” approach, 4–5, 7
Barnes & Noble, 76–78
Bartlett, Frederic, 188
BayCHI, 412
Behavior diaries, 246
Behavioral survey questions and subcategories, 332
Belief, truth vs., 374
Bellotti, Victoria, 230
Benchmarking, 89b
Beyer, Hugh, 226–228
Bias
in recruiting, 121
reducing, 85
Bimodal distribution, 363, 364f
Binary questions, avoiding, 133–134
Blogs, 410
Blue-sky brainstorm, usability testing, 295b–296b
Body language
focus group moderation, 166
interpretation in global research, 224
note-taking, 163
Bolt, Nate, 308
Brainstorming survey questions, 331–333
Brand analysis, 75
Budgeted projects, 71–72
Business competitive analysis, 75

C

Card sorting, 201–209
need for, 202
preparing cards, 203–205
qualitative analysis, 206–207
quantitative analysis, 207–209
recruiting, 202–203
software, 209b
Cardioid microphones, 166
Carter, Scott, 248
Characteristic survey questions and subcategories, 331
Checklists
focus group checklist, 163b
scenario checklist, 505b–506b
Chi-square test, 373
Clicks and time spent, 464
Clickstream analysis, 463–464
Closed-ended questions, 333
Cluster analysis, 208
CNET, 88–89
relating groups into frameworks, 442–447
stories, importance of, 445–447
Cognitive mapping, 197f
Collage, 189–195
assembling components, 189–191
conducting exercise, 193–194
preparing the components, 191–192
script writing, 192–193
Commodity products, 26
Communicating scenarios, 506–507
Company’s success, in iterative development, 24
open-source product development, 27b–29b
profit, 24
promotion, 26–29
success for advertisers, 24–29
Competition
dimension identification, 80–81
identification of, 76–79
profiles, 78–79
Competitive analysis, See Competitive research
Competitive product interviews and observation, 83–84
Competitive research, 73–93
acting on, 92–93
analyzing, 88–90
duration of, 82b
effectiveness, 74–75
focus groups for, 146–147
Match.com, quick evaluation of, 90–92
methods, 75–87
audience profile, 79
competition identification, 76–79
competition profiles, 78–79
competitive product interviews and observation, 83–84
dimension identification, 80–81
focus groups, 84
product description, 78–79
recruiting, 82–83
surveys, 86–87
techniques, 81–87
usability tests, 84–86
sequence of steps, 75
surveys, 332
Conclusion section, 542, 557
appendices, 544
testing reports, 545
Confidence interval, 370, 371f
Conflicts, internal
triggering, 550
Contact information for surveys, 343, 349
Context scenarios, 501–502
Contextual inquiry, 232–233
as follow-up, 377
Conversions, 464–465
Cooper, Alan, 482
Corporate edict, 30
Costs, of professional recruiters, 125–129
Cross-tabulation, 365
Cultural probes, 254b–255b
Customer feedback, 468–475
analyzing comments quantitatively, 473
collecting, 470–471
customer-facing stakeholders, collaborating with, 473–474
making the most of, 474–475
reading comments, 471–473
sources of, 469–470

D

Dalal, Brinda, 238–241
Dandavate, Uday, 188
“Debriefing”, 425
Demographic attributes
in advertising, 24–29
in audience profiles, 79
in focus group recruiting, 115
in persona descriptions, 489–490
sample survey questions on, 335–336
Demographic factors, 79
Demographic persona descriptions, 489–490
Descriptive survey goals, 331
Design scenarios, 501–502
Design springboards, 533
Desires as persona attributes, 498–499
Development team, 563–564
Dialogic techniques, 181–183
photo elicitation, 181–183
Diary studies, 243–271
choosing your platform, 257–261
components, assembling, 261–268
conducting, 268–270
cultural probes, 254b–255b
defined, 243
designing, 248–268
duration of, 244–247
follow-up activities, 270
incentives, 269
images and video, incorporating, 260b–261b
inventing good exercises, 250–257
managing responses, 268–270
preparation, 247–248
pretesting, 247, 268
recruiting, 247–248
reminders for participants, 270–271
sampling rate for, 249–250
schedule, 248t
self-reporting issues, 256b–257b
trend extraction, 317–319
usability test diaries, 245–247
Dimensions for comparison, defining, 80–81
Discussion guide for focus groups, 154–160, 162t
Doblin Group, 231–232
Document camera, 300b–301b
Documentaries, 533
Documentation of interviews, 138b–139b
Domain learning, 223–224
Duration bias, 355

E

eCalendar Power Users, 108–109
e-commerce sites, 565
Editing and ordering survey questions, 341–343
eLab, 231
Elicitation activities in diary studies, 248, 250
Elicitation, conducting, 186
Elliott, Ame, 399
Email
diary study management
recruiting, 102b
survey invitations, 357
surveys based on, 357
Emotional topics in focus groups, 174
Emphasizing users’ perspectives in reporting, 547
End users, success for, 22–24
desirability, 23–24
efficiency, 22–23
functionality, 22
usability and design, 23b–24b
Environment, 490
attention to, 233–235
Errors in surveys
checking, 347
estimating, 369–371
measurement errors, 372–373
Ethnography, 217b–218b
Evaluator for usability test, 278, 326
Expectation bias, 356
Experience models, 522–528
making, 523–528
research required for, 524–525
starting to draw, 525–528
using, 528
Expert/novice relationship, field visits, 228
Explanatory survey goals, 331
Exploratory focus groups, 146
Extending the reach of research, 556–557
Extreme/lead user strategy, 220
Eye tracking, 310–314

F

Fair compensation, 399b–400b
Feature audit, 80–81
Feature prioritization focus groups, 146
Features
prioritization exercise, 281b–282b
Feedback activities, 248, 250
Feedback questions, 99
Field interviews and observation, 395–400
building the field team, 395–398
building trust, 398
liaisons and guides, 396–397
research assistants, 397–398
spending time in field, 398–400
translators and moderators, 397
Field visits, 211–242
AEIOU framework, 231–232
attention to the environment, 233–235
collecting artifacts, 236–237
defined, 213–215
domain learning, 223–224
establishing a relationship, 226–229
ethnography, 217b–218b
expectations, expressing, 224
extreme/lead user strategy, 220
with a farmer, 214f
fieldwork way, 241b–242b
focus, 231–237
goal of, 213–215
introductory conversation, 229–230
main observation period, 230
note taking, 237–238
observational research, 238–241
participants selection, 219–220
practical stuff, 224–226
process, 219–237
recruiting, 220–221
scheduling, 221–223
structuring, 229–231
tour of a home office, 214f
typical user strategy, 220
uses of, 215–219
videotaping, 225b
workarounds, 235–236
wrap-up, 230–231
Fielding a survey
bias reduction in, 353–356
defined, 349
invitations, 356–359
sample and sampling frame, 349–352
sample size, 352–353
telephone, in-person and mail surveys, 359
Fieldwork way, 241b–242b
Flickr, 26–27, 26f
Flip HD camera, 26–27
Flow Interactive, 315
Flowcharts, 444
Focus groups, 84, 141–178, 239–240
asking questions, 171–173
assistant moderator, 169
breaks, 162t
checklist, 163b
common problems, 173–175
competitive, 146–147
conducting, 147–160, 162–178
discussion guide, 154–160, 162t
dominant participant, 170
ejecting participant, 174–175
emotional topics, 174
exploratory, 146
for feature prioritization, 146
groupthink, 173
hiring experts, 177–178
latecomers, 175b
limitations of, 144–145
moderating the discussion, 169–171
moderators, 166–168
observers, 175–177
offensive ideas, 174
online, 146
overview, 141
participant knowledge about, 152
physical layout, 162–166
prescreening participants, 150–151
problems for interviewing, 136–138
problems for surveys, 373
questions, 154
recruiting for, 151–153
remote observation of, 165b
schedule for, 148, 161
scope of research, 148, 153–154
seating order, 164b
size of group, 153
strength of, 142–144
surveys vs., 145
tangents, 171
target audience for, 147, 149–151
timing for, 142
topics, 147–149
trend explanation, 147
types of, 145–147
for user experience design, 143
videotaping, 157, 165
Follow-up activities, 270
Follow-up interviews, 223
Follow-up questions, 172
Food and drink, for focus groups, 164
Formal reports, See Reports, formal
Freeman, Beverly, 244
Friends and family recruiting, 103
for focus groups, 151
Friends and family usability test, 11
Funnel analysis, 463–464

G

Gaze plot, 313f
Generative techniques, 188–201
toolkits, 189–190
Global and cross-cultural research, 385–403
analyzing the data, 402
building the field team, 395–398
building trust, 398
course corrections, 403
field interviews and observation, 395–400
research program, building, 403
international travel, 391
local research providers, utilizing, 391–392
multilingual research, 392–393
recruiting, 393–395
remote research, 391
research planning, 388–391
research plans, challenges for implementing, 402
spending time in field, 398–400
surveys, 400–401
Goals
descriptive, 331
explanatory, 331
for surveys, 331
Goals for research planning, 48–55
collecting issues and presenting them as goals, 49–52
expanding general questions with specific ones, 54–55
prioritization exercise, 52b
prioritizing goals, 52–53
rewriting goals as questions, 53–54
tips, 55
Goodwin, Kim, 221, 228
Google Analytics, 458, 459f
GoToMeeting, 299
Group discussions, 171
Groupthink, 173

H

Hawley, Michael, 73–74
“Heat map” graphics, 311, 312f
Heavyweight data analysis, 450
Hidden understandings, surfacing, 216
Hierarchical task analysis (HTA), 511
Holtzblatt, Karen, 226–228
HotBot diary, 265, 266b

I

Image elicitation, 186, 188t
Impressions as metrics, 464
Incentives
for diary studies, 269
for research participants
choosing, 118b–119b
for field visit, 222
no-shows and, 120
for surveys, 348–349
Independent variable, identifying, 365
Individuality, 26–27
Informal reporting, 532
Informed consent, 126b–128b, 399b–400b
gaining, 126b
In-person surveys, 359
Insightful user research, 477–530
demographic descriptions, 489–490
desires, 498–499
environment, 490–491
lifestyle/psychographic, 491–492
personal description, 497
roles, 492–498
Instructions
for diary studies, 262–266
for focus groups, 176b
general, for surveys, 343–345
International travel, 391
Internet search, 76
Interpretation session, 552, 554–556
empathy, cultivating, 555
identifying business implications of research insights, 555
presentation styles, 554
ways to collaboratively solve problems, 556
Interruption invitation, 357–359
Interviewer–interviewee relationship, 227
Interviewing, 129–138
breaking the rules, 138
common problems, 136–138
composing nondirected questions, 132–134
documenting interviews, 138b–139b
neutral interviewer for, 131–132
nondirected, 130–136
phases, 129
running nondirected interview, 134–136
structure of interviews, 129–130
for usability testing, 296–308. See also specific types
Introductory conversation, 229–230
Introductory letter, 261–262
Intuit’s website, 101f
Invitations
bias, 355
email, 357
interruption, 357–359
links for, 356–357
random selection, 358b
for research participants, 116–117
survey bias from, 354
for survey participants, 356–359
Iterative development, 21–44, 32f
adaptability, 34
benefits of, 33–35
company’s success, 24
profit, 24
promotion, 26–29
corporate edict, 30
creation step, 31
definition step, 31
end-user’s success, 22–24
desirability, 23–24
efficiency, 22–23
functionality, 22
examination step, 30–31
flexibility, 33
for improving the product, 34–35
for Internet-based products, 35
iterative spiral, 31–33
need for, 29
overview, 29
for product identity, 35
problems, 30–31, 35–36
sample research program in, 37f
scheduling service example, 38–44
shared vision, 34–35
system of balance in, 29–36
user research and, 36
waterfall development, 36–38
waterfall method, 31, 31f

J

Jordan, Brigitte, 238–241

K

Khalil, Chris, 260
Kindle device, 77–78, 79b
Klein, Laura, 466
Knudstorp, Jørgen Vig, 4–5
KonfiKits, 254b–255b
Krueger, Richard A., 155, 164b

L

Latecomers in focus groups, 175b
LEGO City, 7
LEGO Group, 4–10
“back to basics” approach of, 4–5
Lifestyle/psychographic persona attributes, 491–492
Lightweight data analysis, 448–450
Likert scales, 339
Local research providers, utilizing, 391
Long-term value, building, 567
LookSmart, 85–86
“Lusers”, 568

M

Mail surveys, 359
Main observation period, 230
Maintenance of research plan, 72
“Managing Expenses” online diary study, 244, 246f
Mankoff, Jennifer, 248
Mapping, 195–202
social, 198–200
spatial, 195–198
Maps, 442
Mariampolski, Hy, 212
Market research, 239–240
Marx, Karl, 131
Master/apprentice model, 227
“Mastery”, experiences of, 5–6
Match.com, quick evaluation of, 90–92
Matchmaking, 76
McKee, Jake, 6
Mean, 362
calculating, 362
defined, 362
mode vs., 363
Measure, research, design, and test, 465–466
Measurement errors in surveys, 372–373
Median, 363–364
Metaphor elicitation, 189
Metrics, 459–465
clickstream analysis, 463–464
for Internet advertising, 464–465
session-based statistics, 462
site-wide measurements, 460–461
user-based statistics, 462–463
Microsoft Xbox, 348
Micro-usability test, 12–17
creating tasks that address goals, 14–15
defining audience and their goals, 13–14
right people, 15
watching, 15–17
Mixing digital and paper coding, 438–447
Mode, 362–363
calculating, 362–363
defined, 362–363
mean vs., 363
Moderating usability tests, 302–305
Moderators, 166–171
asking questions, 171–173
hiring experts, 177–178
Moen, 211
Monroe, Jeff, 496, 497f
Mood boards, See Collage
Mortality tracking for surveys, 347
Muller, Thor, 471
Multilingual research, 392–393
interviews in translation, 392

N

Nano-usability test, 12
Neighbors, recruiting, 103
Netflix website, 212
News site survey example, 380–383
Newspaper style for report, 544b–545b
Niche competitors, 78
Nielsen, Jakob, 278
Nondirected interviewing, 130–136
composing questions for, 132–134
defined, 131
neutral interviewer for, 131–132
running an interview, 134–136
Nondisclosure agreements, 222
Non-responder bias, 354
Nook e-reader, 77–78
Normal distribution, 362–363
No-shows, avoiding, 120–121
Note taking, 237–238
Numbers, using, 547–548

O

Object-based techniques, 179–209
card sorting, 201–209
collage, 189–195
dialogic techniques, 181–183
generative techniques, 188–201
mapping, 195–202
photo elicitation, 181–183
script writing, 183–188
time for, 180–181
Observational research, 238–241
Observations
collecting, 314–316
organizing, 317
Observers
of focus groups, 165b, 175–177
of usability tests, 305–307
One-on-one mapping, 196, 198
Ongoing research
surveys, 376–383
Online dating, 76, 77f
user experience landscape, 90f
Online diary study, 258–261
Online screeners, 113–114
Open-ended questions, 113, 154–155, 333
diary studies, 267
Open-source product development, 27b–29b
Open-source software, user research for, 27b–29b

P

Paper diary study booklet, 245f, 257–258
Participant-chosen images, 190–191
Personal description, 497
Personal photography site, 223
Personas, 482–489
creating, 485–486
analyzing the data, 485–486
defining, 486–489
prioritizing attributes and patterns, 486
provisional, 484b
research for, 483–484
internal interviews, 483
market research review, 483–484
research with participants, 483
usage data and customer feedback review, 484
Photo elicitation, 181–183
assembling images for, 182–183
Photographing interviews, 138b–139b
Physical environment, examining, 233–234
Physical layout
for focus groups, 162–166
for usability testing, 296–302
Picture-in-picture video documentation, 297f
Portfolio of core competencies, 88
Portfolio of frequent foibles, 88
Portigal, Steve, 135–136
Post-it notes, 163, 199, 438, 440, 448, 451
Powells.com, 78
Pre/post surveys, 378–379
Preliminary interview in usability testing, 290, 290b
Presentations, 545–552
common problems in, 549–552
Pretesting, 348
diary studies, 268
reports, 545
survey, 348
Pricing usability, 565–567
Prioritization, 81
Privacy, providing, 231b
Probing
expectations, 303
nonverbal cues, 304
Problem scenarios, 501–502
Process/purchase diaries, 246
Product description, 78–79
Product development, 27b–29b
Product manager, 49–50
Professional recruiters, 122
costs, 125–126
finding, 122–123
requirements, 124–125
services provided by, 123–124
Professional terminology, using, 547
Published information, 405–421
background, 414
budget, 417
case studies, 416
client’s role, 416–417
core competencies, 416
independent analysis, 406–407
marketing research, 409
process, 416–418
project description, 415
project summary, 414
publications and forums, 409–410
questions, 415
references, 417
request for proposal, 414
schedule, 416
setting expectations, 418–421
specialists, hiring, 410–414
casual method, 413
formal method, 413–414
timing, 410–411
traffic/demographic information, 407–409

Q

QualiData, 211
Qualitative data, analyzing, 423–451
capturing and discussing initial insights, 425–426
digital, 438–441
finding patterns and themes, 428–436
heavyweight data analysis, 450
ideal process for, 424–436
lightweight data analysis, 448–450
mixing digital and paper coding, 438–441
paper, 436–438
preparing the data, 426–428
sorting the data into groups, 428–436
transcription, 426b–427b
typical analysis plans, 447–450
Question instructions, 344
Questionnaire forms, 266–268

R

Random error, 373
Readability Graph, 263b
Recruiting, 82–83, 95–108, 393–395
anonymity and, 121–122
bias in, 121
building and space preparation, 122
card sorting, 202–203
defined, 95–96
demographics, 98
diary studies, 247–248
by email, 102b
from existing contacts, 100–102
for field visit, 220–221
finding audience, 97–100
for focus groups, 151–153
friends and family for, 103, 151
as full-time job, 96–97
importance of, 97–98
incentives for participants, 118b–119b
no-shows, avoiding, 120–121
pitfalls, 119–122
professional recruiters for, 122
cost, 125–126
finding, 122–123
needs of, 124–125
responsibilities of, 123–124
questionnaire for building database, 106
schedule for, 96t
scheduling for participants, 115–119
screeners for, 106–108
target audience, 96
time requirements, 96
tips, 105–106
for usability testing, 275–280
using commercial recruiting service, 104
wrong group, 119–120
Recruiting services, 104
Refined surveys, 378
Relationships with participants, establishing, 226–228
Reminders, for diary study participants, 269–270
Remote research, 115, 308, 391
remote usability testing, 308–310
Reports, formal
audience for, 531, 533–535
creation, 536–545
conclusions section, 542
executive summary section, 537
interesting quotations section, 539b–540b
main themes, 541–542
organization, 537–541
participant profiles, 541
picking a format, 536
procedure section, 537–541
research outcomes, 542
newspaper style for, 544b–545b
participants’ privacy and confidentiality, 541b
preparation, 533–536
process knowledge, 535–536
for surveys, 345–346
testing, 545
time requirements for preparation of, 533. See also Presentations
Representing activities and processes, 507–521
mapping processes, 520–521
task analysis, 508–520
data analysis, 511–515
flowchart diagrams, 515–517, 516f
grids, 517
hierarchical, 513–515
representation, 515–520
in spiral development process, 508
studying tasks, 509–511
task decomposition, 511–513
towers, 518–520
traditional, 508
Request for proposals (RFP), 413–414
Requirements gathering, 216
Research planning, 47–72, 47b–48b, 388–391
after release, 58–59
Agile software development and user research, 56b–57b
asking questions across multiple projects, 63–64
budget, 65–66, 71–72
challenges for implementing, 402
checking assumptions, 388–390
choosing among the techniques, 60b–61b
choosing the approach, 391
deliverables, 72
design and development, 59
development and design, 58
early design and requirement gathering, 57–58
focus groups, 69
focusing on study, 390
format of, 64–65
goals, 48–55
collecting issues and presenting them as, 49–52
expanding general questions with specific ones, 54–55
prioritization exercise, 52b
prioritizing, 52–53
rewriting, as questions, 53–54
tips, 55
immediate user research, 68
integrating research and action, 55–64
issues, 67
maintenance, 72
organizing research questions into projects, 59–63
requirement gathering, 59
schedule, 70–71
parallel projects, 63
short-term and long-term goals in, 72
site visits, 69–70
starting in the middle, 58–59
starting research in the beginning, 57–58
structure, 68–71
summary, 67
usability testing, 68–69
user profiling, 69–71
Research questions
asking across multiple projects, 63–64
binary, 133–134
common problems, 136–138
rewrite goals as, 53–54
simplicity in, 136
Research report organization, 537–542
conclusions, 542
executive summary, 537
main themes, 541–542
participant profiles, 541
procedure section, 537–541
research outcomes, 542
usability test report, 319
Research-driven workshops, 552
Researcher-generated photographs, 184
Resource distribution, understanding, 234–235
Respect for ownership of ideas, 399b–400b
Response rate for surveys, 347
Restating answers to interview questions, 134
Return on investment (ROI), 565
Revelation, 244–245
RFP, See Request for proposals (RFP)
Roles as persona attributes, 497–498

S

Sample, 349
bias reduction, 353
sampling frame, 349–352
size of, 352–353
Satisfaction surveys, 328
Scenarios, 501–507
checklist, 505b–506b
communicating scenarios, 506–507
deciding what stories to tell, 502–503
design scenarios, 505b–506b
formation of, 502–507
using, 507
when to use, 501–502
writing, 503–506
Schedule
for field visit, 221–223
for focus groups, 148, 161
for surveys, 329–330
usability testing, 276t
Scheduling research participants, 115–119
confirmation and reconfirmation, 117–119
double scheduling, 120
invitation, 116–117
scheduling window, 115
Scheduling service example, 38–44
Schemas, 188
Screeners for recruiting, 106–108
general rules, 106
importance of, 106
telephone screener example, 108
Script for usability tests, 287–296
Script writing, 183–188, 192–193
Self-reporting, 256b–257b
Self-selection, 355
Session-based statistics, 462
Shiple, John, 74, 561
Side-by-side comparisons, 89
Site-wide measurements, 460–462
Situations, representing, 501–507
Snap.com, 85–86
Social mapping, 198–200
Sofa-buying analysis
diagram, 516f
simple grid for, 518f
swimlane diagram for, 521f
towers representation for, 519f
Spatial mapping, 195–198
Spectrums, 445
Spotter diaries, 245–247
Spreadsheets, 24, 77, 204, 206, 207b, 280, 326
Stakeholders, 562
Standard deviation, 370
Standard error, 369–370
Stealth problems reporting, 551–552
Surveys, 86–87, 327–383
accurate results, 329b
analysis and interpretation, 359–360
analyzing responses, 359–376
attitudinal questions and subcategories, 332–333
behavioral questions and subcategories, 332
bias reduction in, 353–356
bimodal distribution, 363
brainstorming questions, 331–333
characteristic questions and subcategories, 331–332
common problems, 373
comparing variables, 365–369
competitive research, 332
contact information, 343
contextual inquiry and follow-up, 377
counting results, 360–365
cross-tabulation, 365
defined, 327–328
descriptive goals, 331
drawing conclusions, 373–376
editing and ordering questions, 341–343
email, 357
error checking, 347
error estimation, 369–371
example for news site, 380–383
explanatory goals, 331
fielding, 329–359
follow-up qualitative research, 376–377
free web survey tools, 346b–347b
general instructions, 343
goals, 331
incentive for, 348–349
in-person, 359
instructions, 343–345
invitations, 356
laying out the report, 345–346
mail, 359
mean calculation, 362
measurement errors, 372–373
median calculation, 363–364
missing data for, 364b
mode calculation, 362–363
mortality tracking, 347
ongoing research, 376–383
pre/post, 378–379
profiles, 328
questions, 333–341
random selection, 358b
refined, 378
response rate, 347
sample and sampling frame, 349–352
sample size, 352–353
satisfaction surveys, 328
schedule for, 329–330
sweepstakes laws and, 344b
systematic error in, 373
tabulating results, 356
telephone, 359
testing, 348
timing for, 328–329
tracking surveys, 377–378
tracking timing of responses, 356
and usability testing, 347, 377
value surveys, 328
web survey tips, 346–347
writing, 330
Suvak, Jack, 241
Sweepstakes laws, surveys and, 344b
System of balance in iterative development, 29–36
adaptability, 34
benefits of, 33–35
flexibility, 33
iterative spiral, 31–33
problems in, 30–31, 35–36
shared vision, 34–35
Systematic error, 373

T

Target audience, See Audiences
Task analysis, 508–520
analyzing the data, 511–515
flowchart diagrams, 515–517, 516f
grids, 517
hierarchical, 513–515
representing tasks, 515–520
in spiral development process, 508
studying tasks, 509–511
task decomposition, 511–513
towers, 518–520
traditional, 508–509
Task decomposition, 511
Task examples, 286t–287t
Task granularity, 229b
Task-based interview, 292, 293b
Tasks, 498
Taxonomies, 442
Telephone screeners for recruiting, 108
Telephone surveys, 359
Temptation, avoiding, 194–195
“Terminate” instruction, 113t
Tier 1 competitors, 77–78
Tier 2 competitors, 78
Timbuk2, 474–475
Time-aware research, 308
Time lines, 443–444
Timing
bias, 354–355
for surveys, 328–329
TiVo digital video recorder, 26–27
Toolkits for generative techniques, 189–190
Topics for focus groups, 147–148
Toutikian, Doreen, 254b–255b
Tracking surveys, 377–378
Trend explanation, 147
from diary study data, 142
from usability test, 317–319
Tulathimutte, Tony, 308
Two-by-two matrixes, 445
Typical analysis plans, 447–450
heavyweight data analysis, 450
lightweight data analysis, 448–449
Typical user strategy, 220

U

Unix operating system, 23b–24b
“Usability maturity”, 559
Usability testing, 11, 84–86, 273–326, 377
analyzing, 314–319
anatomy of test report, 319–326
as survey follow-up, 377
choosing features, 280–283
collecting observations, 314–316
conducting the interview, 296–307
creating tasks, 283–287
defined, 11
estimating task time, 285b
evaluation instructions, 291b
eye tracking, 310–314
feature prioritization exercise, 281b–282b
floaters, 121
task examples, 286t–287t
friends and family test, 11
micro-usability test, 12–17
mobile devices, 300b–301b
moderation, 302–305
nano-usability test, 12
organizing observations, 317
physical layout, 296–302
preparation, 275–296
recruiting, 275–280
remote, 308–310
remote external observation of, 299f
sample spreadsheet layout, 280t
schedule, 276t
script for, 287–296
for surveys, 347
testing environment and recruiting criteria, 321–323
time for, 273–275
tips and tricks, 307
trend extraction, 317–319
Usage data analysis, 454–468
clickstream analysis, 463–464
collecting more useful metrics, 466–468
for Internet advertising, 464–465
measure, research, design, and test, 465–466
metrics, types of, 459–465
planning for, 455–456
session-based statistics, 462
site-wide measurements, 460–461
software and services, 458b
usage data ethics, 467b–468b
user-based statistics, 462–463
web analytics, 456–459
Usage data ethics, 467b–468b
Usage diaries, 245–247
User experience landscape, 90f
User research, 3
assumptions about, 216
in iterative development, 36–38
waterfall development, 36–38
User-based statistics, 462–463
User-centered corporate culture, 559–569
current process, working with, 560–567
development team, 563–564
difficulties creating, 568–569
encouraging user experience roles, 563–564
following and leading, 569
hostility towards creating, 568
integration, 561
involving stakeholders, 562
long-term value, building, 567
measurement of impact, 564–565
measuring effectiveness, 564–565
momentum and resistance to, 568
preparing for failure, 561b–562b
pricing usability, 565–567
research visibility, 564
ROI calculation, 565
starting small and scaling up, 561–562
User-generated photographs, 182

V

Value surveys, 328
Values, 499–501
Veen, Jeff, 563–564
Video, usage of, 548b–549b
Videotaping
for field visit, 225b
focus groups, 157, 165
interviews, 138b–139b
usability tests, 297–298
Voice messaging, 258

W

Wasson, Christina, 231–232
Waterfall development, 36–38
Waterfall method, 31f
problems in, 31
sample research program in, 37f
Web analytics, 456–459
software and services, 458b
Web directories, 85–86
Web server log files, 335–336
Web sites, recruiting using, 98
Web survey tools, 346b–347b
Web-based conferencing software, 299
Web-based survey, 347, 357
error checking, 347
functionality, 347
mortality, 347
response rate, 347
timing, 347
usability test, 347
Website home page redesigns, user experience research into, 561
Wikimedia, 319b–320b
Wikipedia, 275
Wikitext, 323b–325b
Wilson, Chauncey, 173
Workarounds, 235–236
Workshop, 552–556
common interpretation session activities, 554–556
empathy, cultivating, 555
identifying business implications of research insights, 555
presentation styles, 554
ways to collaboratively solve problems, 556
Wrap-up
field visits, 230–231
in focus group discussion guide, 161–178
in interviewing, 130
usability testing, 295, 295b–296b
Writing survey questions, 333–341
Written report, 532

Y

Yahoo! Research, 93

Z

Zaltman, Gerald, 189
Z-test, 373