accepted build cycle, testing during, 198-199
accessibility, as QoS (quality of service), 69
actors, personas versus, 65-66
Actual Quality versus Planned Velocity graph, 97-98
ad hoc testing. See exploratory testing
adapting process (iterative development), 35
adaptive approach, plan-driven approach versus, 38-39
adaptive projects, 51
advocacy groups, viewpoints of risk, 36-37
Agile Manifesto, 2
agility, 3
AIB (Applied Integration Baseline), 122
Application Designer, 124
Applied Integration Baseline (AIB), 122
architecture, 116
baseline architecture, 119-122
citizenship, 128
Design for Operations, 128-131
QoS mindset and, 126-128
reference architectures, 122-123
SOA (service-oriented architecture), 116-119
VSTS and, 124-127
troubleshooting, 224-226
validating, 121
in VSTS, 116
assessment data in bug reports, 216-218
attractiveness, as QoS (quality of service), 69
audit trails, 109-111
auditability, 41
auditors, 41
Austin, Robert, 83
automated build system, 156-161
automated code analysis, 138-139
automated scenario testing, 172-175
automated testing, 201
availability, as QoS (quality of service), 70
baseline architecture, 119-120
reference architectures, 122-123
refining, 121-122
batch sizes, 32
Beizer, Boris, 195
bluffing, 233
Boehm, Barry, 28
bottom-up estimation, 100
branching (source control), 156
Broken Windows theory, 193
Brooks, Fred, 30
bug find rate, 233-235
Bug Rates graph, 93-94
bugs. See also prioritizing bugs; testing
capacity to handle, 228-229
lifecycle of, 206-210
reactivation rate, troubleshooting, 232
writing bug reports, 210-212
assessment data in, 216-218
objective data in, 214-216
plans in, 218
SOAP analogy, 212-213
subjective data in, 213-214
Bugs by Priority graph, 95-97
build failures, troubleshooting, 229-230
build reports, 158-159
build verification tests (BVTs), 148-150, 160, 196-198
business process models, 52
Buwalda, Hans, 194
BVTs (build verification tests), 148-150, 160, 196-198
Capability Maturity Model Integration. See CMMI
change requests, 207. See also bugs
changesets, 153
check-in cycle, testing during, 196
check-ins, associating work items with, 109-111
checking in files (source control), 153-155
citizenship, 128
CMMI (Capability Maturity Model Integration), 3
MSF for CMMI Process Improvement, 29
Cockburn, Alistair, 35
code analysis. See automated code analysis
code churn, 21
code coverage, 186-187
overlaying, 21
troubleshooting, 231-232
in unit testing, 143-145
code reviews, 138-141
commercial-off-the-shelf software (COTS), 5
common cause variation, 81-82
compatibility, as QoS (quality of service), 69
competition, global competition, 3
compliance, 3
component integration tests, 147
concurrency, as QoS (quality of service), 69
configuration management. See source control
configuration testing, 146-148, 189-192
conformance to standards, as QoS (quality of service), 70
contextual inquiry, 58-59
continuous integration, 160-161
contract-first design, 118-119
control theory (iterative development), 31
COTS (commercial-off-the-shelf software), 5
coverage. See code coverage
Csikszentmihalyi, Mihaly, 8
cumulative flow diagram, 88-90
customer validation of scenarios, 62-64
daily activities, instrumenting, 17-21
daily build cycle, testing during, 196-198
daily builds, 158
data sets, unit testing, 146-147
data warehouse. See work item databases
dead reckoning, 99
Declaration of Interdependence, 5
defects. See bugs
DeMarco, Tom, 102
Deming, W. Edwards, 81
Deployment Designer, 125-129
descriptive metrics, prescriptive metrics versus, 83-85
Design for Operations, 128-131
development
quality issues, 134-135
programming errors, 135, 138-141
troubleshooting, 229-233
value-up approach to, 134
discoverability, as QoS (quality of service), 69
discovery, testing as, 194-195
dissatisfiers, 66-67
Kano Analysis, 71-75
documentation. See required documentation
duplicate bugs, 217
ease of use, as QoS (quality of service), 69
economics (iterative development), 30
efficiency, as QoS (quality of service), 69
Einstein, Albert, 1-2
elevator pitch, 51
EQF (Estimating Quality Factor), 102-103
errors. See bugs
estimating iterations, 98
bottom-up estimation, 100
estimation quality, 102-103
refinements to, 100-102
retrospectives, 103-104
top-down estimation, 99
Estimating Quality Factor (EQF), 102-103
estimating tasks, 4
value-up approach, 5
flow, 8-13
work-down approach versus, 6
work-down approach, 4
value-up approach versus, 6
Excel, accessing metrics warehouse from, 223
exciters, 66
Kano Analysis, 71-75
explicit sign-off gates, implicit sign-off gates versus, 40-41
exploratory testing, 194
false confidence in testing, 195
fault feedback ratio, troubleshooting, 232
fault model, 187
fault tolerance, as QoS (quality of service), 69
flow, 8-10
transparency, 11-13
work-down approach versus, 10-11
focus (iterative development), 30
focus groups, 58
gaps in testing, 184-189
geographic boundaries, fitting process to project, 45
global competition, 3
goals of scenarios, 56-57
“good enough” testing, 192-193
governance model, 40-41
granularity (iterative development), 33
historical estimates, tracking, 102
implicit sign-off gates, explicit sign-off gates versus, 40-41
indifferent features (Kano Analysis), 72
installability, as QoS (quality of service), 70
instrumented profiling (performance profiling), 152
instrumenting daily activities, 17-21
interoperability, as QoS (quality of service), 71
iron triangle, 10-11
iteration cycle, testing during, 199-200
iterations, 31
estimating, 98
bottom-up estimation, 100
estimation quality, 102-103
refinements to, 100-102
retrospectives, 103-104
top-down estimation, 99
test objectives, 192-193
triage and, 108-109
iterative development, 30-32
adapting process, 35
control theory, 31
economics, 30
focus, 30
granularity, 33
length of, 32
motivation, 31
prioritization, 33-34
reasons for using, 30-31
risk management, 30
stakeholder involvement, 31
job descriptions, project roles versus, 43
Kano Analysis, 71-75
Kerth, Norman L., 103
late delivery, 11-13
length of iterations, 32
lifecycle of bugs, 206-210
load testing, 177-182
localizability, as QoS (quality of service), 69
Logical Datacenter Designer, 125-126
maintainability, as QoS (quality of service), 70
manageability, as QoS (quality of service), 70-71
managed code analysis, 139
managing projects. See project management
manual code reviews, 140-141
McConnell, Steve, 11
metrics
multidimensional metrics, 20
prescriptive metrics versus descriptive metrics, 83-85
team efficiency, 201-202
metrics warehouse, 18
accessing from Excel, 223
for project management, 86-87
troubleshooting with
development practices, 229-233
testing bottleneck, 236-240
tests pass, solution doesn’t work, 233-236
underestimation, 223-230
usefulness of, 222-223
Microsoft Project, 16
monitorability, as QoS (quality of service), 70
MoSCoW (prioritization scheme), 33
motivation (iterative development), 31
MSF for Agile Software Development, 28-29
change requests, 207
reports, 88
special cause variation, 82
MSF for CMMI Process Improvement, 28-29
audit trails, 110
change requests, 207
governance model, 41
reports, 88
special cause variation, 82
multidimensional metrics, 20
must-haves (Kano Analysis), 72
objective data in bug reports, 214-216
operability, as QoS (quality of service), 70
operations, Design for Operations, 128-131
organization, prescribed versus self-organization, 43
organizational boundaries, fitting process to project, 45
outsourcing/offshoring, 3
overlaying code coverage, 21
pain points, 57
paradigm shifts, 2-4
paradigms. See value-up approach; work-down approach
performance
architecture and, 127
as QoS (quality of service), 68
performance tuning, 149-152
personas, 55-56
actors versus, 65-66
researching, 57-59
Personify Design Teamlook, 16-17
Pesticide Paradox, 195
plan-driven approach, adaptive approach versus, 38-39
plans in bug reports, 218
Poincaré, Henri, 2
portability, as QoS (quality of service), 70
post-mortems, 35
prescribed organization, self-organization versus, 43
prescriptive metrics, descriptive metrics versus, 83-85
prioritization (iterative development), 33-34
prioritizing bugs, 95-97
triage, 104-108
example, 104-106
iterations and, 108-109
red line, 106-108
privacy, as QoS (quality of service), 68
process, fitting to project, 21-22, 37
adaptive approach versus plan-driven approach, 38-39
auditability and regulatory concerns, 41
geographic and organizational boundaries, 45
implicit versus explicit sign-off gates and governance model, 40-41
prescribed versus self-organization, 43
project switching, 43-45
required documentation versus tacit knowledge, 39
Process Template, 22
product backlog, 12-13
production environment, testing for, 189-192
profiling (performance tuning), 149-152
programming errors, development quality and, 135, 138-141
project cycle, testing during, 200
project management. See also projects
audit trails, 109-111
estimating iterations, 98
bottom-up estimation, 100
estimation quality, 102-103
refinements to, 100-102
retrospectives, 103-104
top-down estimation, 99
metrics warehouse, 86-87
prescriptive versus descriptive metrics, 83-85
questions to answer, 86-87
Actual Quality versus Planned Velocity graph, 97-98
Bug Rates graph, 93-94
Bugs by Priority graph, 95-97
Project Velocity graph, 90-91
Quality Indicators graph, 92-93
Reactivations graph, 95-96
Remaining Work graph, 88-90
Unplanned Work graph, 91-92
triage, 104-108
example, 104-106
iterations and, 108-109
red line, 106-108
variation, 81-82
project portal, 41-42
project smells, 19
project switching, 43-45
Project Velocity graph, 90-91
projects. See also project management
adaptive projects, 51
adaptive approach versus plan-driven approach, 38-39
auditability and regulatory concerns, 41
geographic and organizational boundaries, 45
implicit versus explicit sign-off gates and governance model, 40-41
prescribed versus self-organization, 43
project switching, 43-45
required documentation versus tacit knowledge, 39
requirements. See requirements
strategic projects, 51
troubleshooting. See troubleshooting with metrics warehouse
when to build, 3-4
QoS (qualities of service), 50, 67-71
architecture and, 126-128
Kano Analysis, 71-75
testing, 177-183
quality. See also bugs
in development, 134-135
programming errors, 135, 138-141
requirements, 135-137
velocity versus, 97-98
Quality Indicators graph, 92-93
ranking order, 35
reactivation rate, troubleshooting, 232
Reactivations graph, 95-96
recoverability, as QoS (quality of service), 70
red line (triage), 106-108
reference architectures, 122-123
regression testing, 183-184
regulatory compliance. See compliance
regulatory concerns, 41
reliability, as QoS (quality of service), 70
Remaining Work graph, 88-90
reporting bugs. See bugs
reports, defining, 88
reports available (project management), 86-87
Actual Quality versus Planned Velocity graph, 97-98
Bug Rates graph, 93-94
Bugs by Priority graph, 95-97
Project Velocity graph, 90-91
Quality Indicators graph, 92-93
Reactivations graph, 95-96
Remaining Work graph, 88-90
Unplanned Work graph, 91-92
required documentation, tacit knowledge versus, 39
requirements, 50
development quality and, 135-137
Kano Analysis, 71-75
personas, 55-56
actors versus, 65-66
researching, 57-59
QoS (qualities of service), 67-71
scenarios, 56-57
customer validation of, 62-64
dissatisfiers, 66-67
in end-to-end story, 61-62
evolving, 64-65
exciters, 66
satisfiers, 66
storyboarding, 60-61
use cases versus, 65-66
user stories versus, 66
writing steps for, 59-60
specificity versus understandability, 53-55
tests against, 184-185
time frame for, 52-53
vision statements, 50-52
requirements analysis, scenario testing versus, 170
researching personas, 57-59
resource leaks, troubleshooting, 229-230
responsiveness, as QoS (quality of service), 69
risk, viewpoints of advocacy groups, 36-37
risk testing, 187-189
roles, job descriptions versus, 43
sampling (performance profiling), 152
Sarbanes-Oxley Act of 2002 (SOX), 3, 109
satisfiers, 66
Kano Analysis, 71-75
scalability, as QoS (quality of service), 69
scenarios, 56-57
customer validation of, 62-64
dissatisfiers, 66-67
in end-to-end story, 61-62
evolving, 64-65
exciters, 66
Kano Analysis, 71-75
satisfiers, 66
states of, 88
storyboarding, 60-61
testing, 169-177
use cases versus, 65-66
user stories versus, 66
writing steps for, 59-60
scheduling time for unplanned work, 228-229
scope creep, 227
Scrum, product backlog, 12
SDM (System Definition Model), 129-130
security
architecture and, 127
as QoS (quality of service), 68
testing, 182-183
self-organization, prescribed organization versus, 43
service-oriented architecture. See SOA
shelvesets, 140
shelving (source control), 155
smells, 19
SOA (service-oriented architecture), 116-119
organizational boundaries and, 46
VSTS and, 124-127
SOAP, bug reporting analogy, 212-213
soap opera testing, 194-195
software. See projects
source control, development quality and, 135, 152-161
SOX (Sarbanes-Oxley Act of 2002), 3, 109
special cause variation, 81-82
specificity of requirements, 53-55
stack ranking, 33
stakeholders (iterative development), 31
stale tests, 236
static analysis, 138-139
storyboards, 60-61
strategic projects, 51
subjective data in bug reports, 213-214
supportability, as QoS (quality of service), 70
System Definition Model (SDM), 129-130
System Designer, 124-125
tacit knowledge, required documentation versus, 39
tasks
estimating. See estimating tasks
troubleshooting, 224-225
TDD (Test-Driven Development), 136-137
team builds, 157
team efficiency, 201-202
Team System, instrumenting daily activities, 18
technology adoption lifecycle, 73-74
templates, Process Template, 22
test lists (for BVTs), creating, 149
test run configurations, unit testing, 146-148
Test-Driven Development (TDD), 136-137
testability, as QoS (quality of service), 70
testing. See also bugs
bottleneck in, 236-240
configuration testing, 189-192
as discovery, 194-195
exploratory testing, 194
false confidence in, 195
“good enough” testing, 192-193
load testing, 177-182
questions to answer with, 169
amount of testing to do, 192-195
automated testing, 201
change testing, 183-184
delivering customer value, 169-177
gaps in testing, 184-189
production environment, 189-192
qualities of service (QoS), 177-183
team efficiency, 201-202
when to test, 196-200
regression testing, 183-184
risk testing, 187-189
scenario testing, 169-177
security testing, 182-183
TDD (Test-Driven Development), 136-137
tests pass, solution doesn’t work, 233-236
unit testing, 196
development quality and, 135, 141-152
troubleshooting, 231-232
value-up approach of, 166-169
Theory of Special Relativity (Einstein), 2
time boxes, 31
top-down estimation, 99
tracking work, 11-13
instrumenting daily activities, 17-21
work item databases, 13-17
transparency. See also tracking work
development quality and, 135, 160-161
of flow, 11-13
triage, 104-108
example, 104-106
iterations and, 108-109
red line, 106-108
triage committee, writing bug reports for, 211
troubleshooting with metrics warehouse
development practices, 229-233
testing bottleneck, 236-240
tests pass, solution doesn’t work, 233-236
underestimation, 223-230
usefulness of, 222-223
Turner, Richard, 28
underestimation, 223-230
understandability of requirements, 53-55
uninstallability, as QoS (quality of service), 70
unit testing, 196
development quality and, 135, 141-152
troubleshooting, 231-232
unmanaged code analysis, 139
unplanned work, capacity to handle, 228-229
Unplanned Work graph, 91-92
usability labs, validating scenarios in, 63-64
use cases, scenarios versus, 65-66
user experience, as QoS (quality of service), 69
user interface tests, 176-177
user stories, scenarios versus, 66
validating
architecture, 121
scenarios, 62-64
value-up approach, 5
to development, 134
flow, 8-10
transparency, 11-13
work-down approach versus, 10-11
ideas in, 245-247
in testing, 166-169
work-down approach versus, 6
variable data sets, unit testing, 146-147
variance
estimating tasks, 4
in project velocity graph, 90
variation, 81-82
velocity, quality versus, 97-98
version control. See source control
version skews, development quality and, 135, 152-161
virtual machines, configuration testing on, 190
vision statements, 50-52
VSTS
architecture in, 116
SOA (service-oriented architecture) and, 124-127
waterfall model, 30
web services, SOA and, 118
Web Services Description Language (WSDL), 118
web tests, in scenario testing, 172-175
Weinberg, Gerald, 43
Windows Server System Reference Architecture (WSSRA), 122
wireframes, 60-61
work item databases, 13-17
instrumenting daily activities, 17-21
work items
associating with check-ins, 109-111
states of, 89-90
work-down approach, 4
flow versus, 10-11
value-up approach versus, 6
writing bug reports, 210-212
assessment data in, 216-218
objective data in, 214-216
plans in, 218
SOAP analogy, 212-213
subjective data in, 213-214
WSDL (Web Services Description Language), 118
WSSRA (Windows Server System Reference Architecture), 122