Chapter 23. Software Requirements

In the prior Team Skills, the features that were defined for the system were purposely left at a high level of abstraction so that:

  • We can better understand the shape and the form of the system by focusing on its features and how they fulfill user needs.

  • We can assess the system for completeness and consistency and its fit within its environment.

  • We can use this information to determine feasibility and to manage the scope of the system before making significant investments.

In addition, staying at a high level of abstraction kept us from making overly constraining requirements decisions too early, that is, before the people closest to the system implementation have their opportunity to add their perspective and value to the system definition. In Team Skill 5, Refining the System Definition, our discussions transition to elaborating the system features in detail sufficient to ensure that the design and coding activities result in a system that fully conforms to the user needs. In so doing, we drive to the next level of specificity and detail, and we create a richer, deeper requirements model for the system to be built. Of course, we also create more information to be managed, and we will have to be better organized to handle this additional detail.

The level of specificity needed in this next step depends on a number of factors, including the context of the application and the skills of the development team. In high-assurance software systems for medical equipment, aircraft avionics, or online trading, the level of specificity is appropriately high. The refinement process may include formal mechanisms for quality assurance, reviews, inspections, modeling, and the like. In systems with less catastrophic consequences of failure, the level of effort will be more modest. In those cases, the work involved is simply to make certain that the system definition is precise enough to be understood by all parties while still providing an efficient development environment and a "high enough" probability of success. The focus is on pragmatics and economics: doing just enough requirements specification to make certain that the software developed is what the user wanted.

Just as there is no one right programming language for every application, there is no one right way to develop the more detailed specifications. Different environments call for different techniques, and requirements managers and requirements writers will probably need to develop a mix of skills suited to various circumstances. We have applied a variety of techniques in a variety of circumstances, from fairly rigorous requirements documents, custom databases, and requirements repositories to use-case models and more formal methods. However, the locus of the effort has typically been a natural-language specification, written clearly enough to be understandable by all stakeholders (customers, users, developers, and testers) but specific enough ("Axis 4 shall have a maximum traverse speed of 1 meter/second") to allow for verification and demonstration of compliance. Before we begin to collect the system requirements, we will first consider the nature of the requirements you will need to discover and to define.

Definition of Software Requirements

In Chapter 2, we started with this straightforward definition of a requirement, which is one of the following.

  • A software capability needed by the user to solve a problem or to achieve an objective

  • A software capability that must be met or possessed by a system or a system component to satisfy a contract, standard, specification, or other formally imposed documentation

Software requirements are those things that the software does on behalf of a user, a device, or another system. The first place to look for software requirements is around the boundary of the system, at the things that go "into" and "out of" the system: the system's interactions with those users, devices, and systems.

To do this, it's easiest to first think of the system as a black box and to think about things that you would have to define to fully describe what the black box does.

In addition to the inputs and outputs, you will also need to think about certain other characteristics of the system, including its performance and other types of complex behavior, as well as other ways in which the system interacts with its environment (Figure 23-1).

Using a similar approach, Davis (1999) determined that we need five major classes of things to fully describe the system:

  1. Inputs to the system—not only the content of the input but also, as necessary, the details of input devices and the form, look, and feel—protocol—of the input. As most developers are well aware, this area can involve significant detail and may be subject to volatility, especially for GUI, multimedia, or Internet environments.

    Figure 23-1. System elements

  2. Outputs from the system—a description of the output devices, such as voice-output or visual display, that must be supported, as well as the protocol and formats of the information generated by the system.

  3. Functions of the system—the mapping of inputs to outputs, and their various combinations.

  4. Attributes of the system—such typical nonbehavioral requirements as reliability, maintainability, availability, and throughput, that the developers must take into account.

  5. Attributes of the system environment—such additional nonbehavioral requirements as the ability of the system to operate within certain operating constraints, loads, and operating system compatibility.

We have worked with this categorization for a number of years and have found that it works quite well, as it helps one think about the requirements problem in a consistent and complete manner. Accordingly, we can determine a complete set of software requirements by defining

  • Inputs to the system

  • Outputs from the system

  • Functions of the system

  • Attributes of the system

  • Attributes of the system environment

In addition, we'll be able to evaluate whether a "thing" is a software requirement by testing it against this elaborated definition.
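As a minimal illustration of that test (not from the text; the candidate requirement wordings below are hypothetical), the five classes can be treated as a simple tagging scheme: anything that fits none of the classes is probably not a software requirement.

    from enum import Enum

    class ReqClass(Enum):
        """The five classes of things needed to fully describe a system."""
        INPUT = "Inputs to the system"
        OUTPUT = "Outputs from the system"
        FUNCTION = "Functions of the system"
        SYSTEM_ATTRIBUTE = "Attributes of the system"
        ENVIRONMENT_ATTRIBUTE = "Attributes of the system environment"

    # Hypothetical candidate requirements, each tagged with the class it falls into.
    candidates = [
        ("The operator enters the trending period from a keyboard or touch screen.", ReqClass.INPUT),
        ("The system prints the trend report on any PostScript printer.", ReqClass.OUTPUT),
        ("Given the defect log, the system computes defects found per period.", ReqClass.FUNCTION),
        ("The system is available 99 percent of the time.", ReqClass.SYSTEM_ATTRIBUTE),
        ("The system runs under the corporate standard operating system.", ReqClass.ENVIRONMENT_ATTRIBUTE),
    ]

    for text, req_class in candidates:
        print(f"[{req_class.name}] {text}")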

Relationship between Features and Software Requirements

Earlier, we spent some time exploring the "features" of a system. The features are simple descriptions of desired and useful behaviors. We can now see that there is a mapping relationship between features and software requirements. The Vision document cites features in the user's language. Each feature, in turn, is expressed in more detailed terms by one or more specific software requirements that must be fulfilled by the developers in order to provide that feature to the user. In other words, features help us understand and communicate at a high level of abstraction, but we probably can't fully describe the system and write code from those descriptions. They are at too high a level of abstraction for this purpose.

Software requirements, however, are specific. We can code from them, and they should be specific enough to be "testable"; that is, we should be able to test a system to validate that it really does implement the requirement. For example, suppose we are developing a defect-tracking system for an assembly-line manufacturing organization or for a software development organization. Table 23-1 shows the relationship between one of the features identified in the Vision document and an associated set of requirements. This mapping, and the ability to trace between the various features and requirements, will form the backbone of a very important requirements management concept known as "traceability," a topic we'll discuss later.

Table 23-1. Requirements associated with particular Vision document features

  Vision document feature:

    Feature 63: The defect-tracking system will provide trending information to help the user assess project status.

  Software requirements:

    SR63.1 Trending information will be provided in a histogram report showing time on the x-axis and the number of defects found on the y-axis.

    SR63.2 The user can enter the trending period in units of days, weeks, or months.

    SR63.3 An example trend report is shown in attached Figure 1.

The Requirements Dilemma: What versus How

As we have seen, requirements tell the developers what their system must do and must cover the issues of the system inputs, outputs, functions, and attributes, along with attributes of the system environment. But there's a lot of other information that the requirements should not contain. In particular, they should avoid stipulating any unnecessary design or implementation details, information associated with project management, and information about how the system will be tested. In this way, the requirements focus on the behavior of the system, and they are volatile only to the extent that the behavior is volatile or subject to change.

Exclude Project Information

Project-related information, such as schedules, configuration management plans, verification and validation plans, budgets, and staffing schedules, is sometimes bundled into the set of requirements for the convenience of the project manager. In general, this must be avoided, since changes in this information, such as schedule changes, increase volatility and the tendency for the "requirements" to become out of date. (When the requirements are out of date, they become less trustworthy and more likely to be ignored.) In addition, the inevitable debates about such things should be well separated from the discussion of what the system is supposed to do. Different stakeholders are involved, and the two kinds of information serve different purposes.

The budget could be construed as a requirement, too; nevertheless, it is another type of information that doesn't fit our definition and therefore doesn't belong with the overall system or software requirements. The budget may turn out to be an important piece of information when the developers try to decide which implementation strategies they'll choose, because some strategies may be too expensive or may take too long to carry out. Nevertheless, budgets are not requirements. In a similar fashion, information describing how we'll know that the requirements have actually been met—test procedures or acceptance procedures—also doesn't meet the definition and therefore doesn't belong in the specs.

We usually find it convenient to "push the envelope" a little bit here. In many cases, it is probably useful for the requirement writer to give a "hint" as to a suitable test for the requirement. After all, the requirement writer had a specific behavior in mind for the requirement, and it's only reasonable to give as much help as possible.

Exclude Design Information

The requirements should also not include information about the system design or architecture. Otherwise, you may inadvertently restrict your team from pursuing whatever design options make the most sense for your application. ("Hey, we have to design it that way; it's in the requirements.")

Whereas the elimination of project management and testing details from the list of requirements is fairly straightforward, the elimination of design/implementation details is usually much more difficult and much more subtle. Suppose, for example, that the first requirement in Table 23-1 had been worded like this: "SR63.1 Trending information will be provided in a histogram report written in Visual Basic, showing major contributing causes on the x-axis and the number of defects found on the y-axis" (see Figure 23-2).

Figure 23-2. Pareto diagram

Although the reference to Visual Basic appears to be a fairly blatant violation of the guidelines we've recommended (because it doesn't represent any input, output, function, or behavioral attribute), it's useful to ask: "Who decided to impose the requirement that the histogram be implemented in Visual Basic, and why was that decision made?" Possible answers to that question might be:

  • One of the technically oriented members of the group defining the Vision document decided that Visual Basic should be specified because it is the "best" solution for the problem.

  • The user may have specified it. Knowing just enough about technology to be dangerous, and worried that the technical people might adopt another technology, one that's more expensive or less readily available, the user knows that VB is readily available and relatively cheap and wants that technology to be used.

  • A political decision within the development organization may have mandated that all applications will be developed with Visual Basic. In an effort to ensure compliance and to prevent its policies from being ignored, management insists that references to Visual Basic be inserted whenever possible into requirements documents.

If a technical developer decides to insert a reference to Visual Basic because of an arbitrary preference for the language, it obviously has no legitimate place in the list of requirements. If the user provided the requirement, things get a little stickier. If the customer refuses to pay for a system unless it's written in Visual Basic, the best course of action is to treat it like a requirement, although we will place it in a special class, called design constraints, so that it is separated from the normal requirements, which influence only the external behavior. Nevertheless, it's an implementation constraint that has been imposed on the development team. (By the way, if you think that this example is unrealistic, consider the common requirement imposed by the U.S. Defense Department on its software contractors until the late 1990s to build systems using Ada.)

Meanwhile, the discussion of Visual Basic in this example may have obscured a more subtle and perhaps more important requirement analysis: Why does the trending information have to be shown in a histogram report? Why not a bar chart, a pie chart, or another representation of the information? Furthermore, does the word "report" imply a hard-copy printed document, or does it also imply that the information can be displayed on a computer screen? Is it necessary to capture the information so that it can be imported into other programs or uploaded to the corporate extranet? The feature described in the Vision document can almost certainly be fulfilled in various ways, some of which have very definite implementation consequences.

In many cases, the description of a problem from which a requirement can be formulated is influenced by the user's perception of the potential solutions that are available to solve the problem. The same is true of the developers who participate with the user to formulate the features that make up the Vision document and the requirements. As the old adage reminds us, "If your only tool is a hammer, all your problems look like a nail." But we need to be vigilant about unnecessary and unconscious implementation constraints creeping into the requirements, and we need to remove such constraints whenever we can.

More on Requirements versus Design

So far, we have treated software requirements, design decisions, and design constraints as if they were distinct entities that can be clearly differentiated. That is, we have stated or implied that

  • Requirements precede design.

  • Users and customers, because they are closest to the need, make requirements decisions.

  • Technologists make design decisions because they are best equipped to pick, among the many design options, the one that best meets the need.

This is a good model, and it is the right starting point for a requirements management philosophy. Davis (1993) calls this the "what versus how" paradigm, where "what" represents the requirements, or "what" the system is to do, and "how" represents the design that is to be implemented to achieve this objective.

We've presented the story in this fashion for a reason. It is best to understand requirements before design, and most design constraints ("use XYZ class library for database access") are important design decisions, recorded with the requirements assets so that we can ensure they are achieved, whether for contractual reasons or for quite legitimate technical ones.

If we couldn't make these classifications at all, the picture would be very muddled, and we couldn't differentiate requirements from design. Further, we would no longer know who is responsible for what in the development process. Even worse, our customers would dictate design, and our designers would dictate requirements.

But a subtle and yet serious complication underlies this discussion and belies the simple paradigm we've presented. Returning to our case study, for example, if the team makes a design decision, such as the selection of a PC technology to run in the HOLIS CCU subsystem, it's likely to have some external impact on the user. For example, a prompt or log-on screen will show up somewhere in the user's world. Better yet, we will probably want to take advantage of some user input capabilities of the OS, and those class libraries will certainly exhibit external behaviors to the user. (Note to the techies among you: Yes, we could hide it, but that's beside the point.)

Given the definitions we've provided in this chapter, the question becomes: Once the impact of a design decision causes external behavior seen by the user, does that same decision, which now clearly affects "input or output from the system," now become a requirement? One could argue that the correct answer is "yes," or "no," or even "it doesn't really matter," based on your individual interpretation of the definitions and analysis we've provided so far. But that makes light of a very important matter, as an understanding of this issue is critical to an understanding of the nature of the iterative process itself. Let's take a closer look.

Iterating Requirements and Design

In reality, the requirements versus design activities must be iterative. Requirements discovery, definition, and design decisions are circular; the process is a continual give and take, in which requirements drive design choices and design discoveries, in turn, feed back into our understanding of the requirements.

Occasionally, discovery of a new technology may cause us to throw out a host of assumptions about what the requirements were supposed to be; we may have discovered an entirely new approach that obviates the old strategy. ("Let's throw out the entire client/data access/GUI module and substitute a browser-based interface.") This is a prime, and legitimate, source of requirements change.

This process is as it should be; to attempt to do otherwise would be folly. On the other hand, there is grave danger in all of this: if we do not truly understand the customer's needs, and the customer is not engaged actively in the requirements process—and yes, in some cases, even in understanding our design-related activities—the wrong decision might be made. When properly managed, this "continual reconsideration of requirements and design" is a truly fantastic process, as technology drives our continually improving ability to meet our customer's real needs. That's the essence of what effective and iterative requirements management is all about. But when improperly managed, we continually "chase our technology tail," and disaster results. We never said it would be easy.

A Further Characterization of Requirements

The preceding discussions on requirements suggested that various "kinds" of requirements exist. Specifically, we have found it useful to think about three "types" of requirements, as shown in Figure 23-3:

  • Functional software requirements

  • Nonfunctional software requirements

  • Design constraints

Figure 23-3. Types of requirements

Functional Software Requirements

As you might expect, functional requirements express how the system behaves. These requirements are usually action-oriented ("When the user does x, the system will do y"). Most products and applications are conceived to do useful work and are rich with functional software requirements; software is used to implement the majority of that functionality.

When you are defining functional requirements, you should seek a good balance between being too specific in stating a requirement and being too general or too ambiguous. For example, it is not usually helpful to have a general functional requirement stated in the form "When you push this button, the system turns on and operates." On the other hand, a requirement statement that takes up several pages of text is probably too specific, but it may be the right thing to do in very special cases. We'll return to this matter in Chapter 26.

We have found that most functional requirements can be stated in simple declarative statements or in the use case form we'll describe in the next chapter. Experience has shown us that a one- or two-sentence statement of a requirement is usually the best way to match a user need with a level of specificity that a developer can deal with.

Nonfunctional Software Requirements

So far in this chapter, most of the examples for requirements have involved behavioral, or functional, requirements of a system, focusing on inputs, outputs, and processing details. The functional requirements tell us how the system must behave when presented with certain inputs or conditions.

But that's not enough to fully describe the requirements of a system. We must also consider things that Grady (1992) called "nonfunctional requirements":

  • Usability

  • Reliability

  • Performance

  • Supportability

These requirements are used most typically to express some of the "attributes of the system" or "attributes of the system environment" elements of our elaborated definition. This convenient classification helps us to understand more about the system we are to build. Let's look at each in further detail.

Usability. It's important to describe the ease with which the system can be learned and operated by the intended users. Indeed, we may have to identify various categories of users: beginners, "normal" users, "power" users, illiterate users, users who are not fluent in the native language of the "normal" users, and so on. If you expect your customer to review and to participate in these discussions—and you'd better—you should realize that whatever requirements you write in this area will be written in a natural language; you shouldn't expect to see a finite state machine description of usability!

Since usability tends to be in the eye of the beholder, how do we specify such a fuzzy set of requirements? Some suggestions follow.

  • Specify the required training time for a user to become marginally productive—able to accomplish simple tasks—and operationally productive—able to accomplish the normal day-to-day tasks. As noted, this may need to be further described in terms of novice users, who may have never seen a computer before, normal users, and power users.

  • Specify measurable task times for typical tasks or transactions that the end user will be carrying out. If we're building a system for order entry, it's likely that the most common tasks carried out by end users will be entering, deleting, or modifying an order and checking on the status of an order. Once the users have been trained to perform those tasks, how long should it take them to enter a typical order? A minute? Five minutes? An hour? Of course, this could be affected by performance issues in the technical implementation (such as modem speed, network capacity, RAM, and CPU power) that collectively determine the response time provided by the system, but task-performance times are also strongly affected by the usability of the system, and we should be able to specify that separately.

  • Compare the usability of the new system with other state-of-the-art systems that the user community knows and likes. Thus, the requirement might state, "The new system shall be judged by 90 percent of the user community to be at least as usable as the existing XYZ system." Remember, this kind of requirement, like all other requirements, should be verifiable; that is, we must describe the requirement in such a way that the users can test and verify the usability against the criteria we've established.

  • Specify the existence and required features of online help systems, wizards, tool tips, user manuals, and other forms of documentation and assistance.

  • Follow conventions and standards that have been developed for the human-to-machine interface. Having a system work "just like what I'm used to" can be accomplished by following consistent standards from application to application. For example, you can specify a requirement to conform to common usability standards, such as IBM's CUA (Common User Access) standards, or the Windows applications standards published by Microsoft.

Examples of usability breakthroughs in the computer world include the difference between command-line interfaces, exemplified by DOS (shudder!) and UNIX systems, and the GUI interfaces exemplified by Macintosh and Windows systems. It is clear that GUI interfaces were instrumental in making computers easier to use for the great masses of nontechnical users. Another example is the Internet browser, which gave a "face" to the World Wide Web and radically accelerated the adoption of the Internet by the average user.

Several interesting attempts to strengthen the somewhat fuzzy notion of usability have been made. Perhaps the most interesting effort has resulted in the "User's Bill of Rights" (Karat 1998). A recent version of the bill contains ten key points:

  1. The user is always right. If there is a problem with the use of the system, the system is the problem, not the user.

  2. The user has the right to easily install and uninstall software and hardware systems without negative consequences.

  3. The user has a right to a system that performs exactly as promised.

  4. The user has a right to easy-to-use instructions (user guides, online or contextual help, error messages) for understanding and utilizing a system to achieve desired goals and recover efficiently and gracefully from problem situations.

  5. The user has a right to be in control of the system and to be able to get the system to respond to a request for attention.

  6. The user has the right to a system that provides clear, understandable, and accurate information regarding the task it is performing and the progress toward completion.

  7. The user has a right to be clearly informed about all system requirements for successfully using software or hardware.

  8. The user has a right to know the limits of the system's capabilities.

  9. The user has a right to communicate with the technology provider and receive a thoughtful and helpful response when raising concerns.

  10. The user should be the master of software and hardware technology, not vice versa. Products should be natural and intuitive to use.

Note that some of the topics covered in the Bill of Rights are essentially unmeasurable and are probably not good candidates for requirements per se. On the other hand, it seems clear that the bill should be useful to you as a starting point in developing questions and defining requirements for the usability of the proposed product.

Reliability. Of course, nobody likes bugs, defects, system failures, or lost data, and in the absence of any reference to such phenomena in the requirements, the user will naturally assume that none will exist. But in today's computer-literate world, even the most optimistic user is aware that things do go wrong. Thus, the requirements should describe the degree to which the system must behave in a user-acceptable fashion. This typically includes the following issues:

  • Availability. The system must be available for operational use a specified percentage of the time. In the extreme case, the requirement(s) might specify "nonstop" availability, that is, 24 hours a day, 365 days a year. It's more common to see a stipulation of 99 percent availability or a stipulation of 99.9 percent availability between the hours of 8 a.m. and midnight. Note that the requirement(s) must define what "availability" means. Does 100 percent availability mean that all of the users must be able to use all of the system's services all of the time? (A small arithmetic sketch relating availability to MTBF and MTTR follows this list.)

  • Mean time between failures (MTBF). This is usually specified in hours, but it also could be specified in days, months, or years. Again, this requires precision: The requirement(s) must carefully define what is meant by a "failure."

  • Mean time to repair (MTTR). How long is the system allowed to be out of operation after it has failed? A range of MTTR values may be appropriate; for example, the user might stipulate that 90 percent of all system failures must be repairable within 5 minutes and that 99.9 percent of all failures must be repairable within 1 hour. Again, precision is important: The requirement(s) must clarify whether "repair" means that all of the users will once again be able to access all of the services or whether a subset of full recovery is acceptable.

  • Accuracy. What precision is required in systems that produce numerical outputs? Must the results in a financial system, for example, be accurate to the nearest penny or to the nearest dollar?

  • Maximum bugs, or defect rate. This is usually expressed in terms of bugs/KLOC (thousands of lines of code) or bugs per function point.

  • Bugs per type. This is usually categorized in terms of minor, significant, and critical bugs. Definitions are important here, too: The requirement(s) must define what is meant by a "critical" bug, such as complete loss of data or complete inability to use certain parts of the functionality of the system.
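To make these quantities concrete, steady-state availability can be related to MTBF and MTTR by the familiar ratio MTBF / (MTBF + MTTR). The sketch below (illustrative numbers only, not taken from any requirement in this chapter) shows how a stated availability target translates into allowable downtime and how an MTBF/MTTR pair translates back into availability.

    def allowed_downtime_hours_per_year(availability: float) -> float:
        """Hours of downtime per year implied by an availability target (0..1)."""
        return (1.0 - availability) * 365 * 24

    def steady_state_availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Classic steady-state availability: MTBF / (MTBF + MTTR)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # A 99.9 percent availability requirement allows roughly 8.8 hours of downtime per year.
    print(round(allowed_downtime_hours_per_year(0.999), 1))   # 8.8

    # An MTBF of 500 hours with an MTTR of 1 hour yields about 99.8 percent availability.
    print(round(steady_state_availability(500, 1), 4))        # 0.998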

In some cases, the requirements may specify some "predictor" metrics for reliability. A typical example of this is the use of a "complexity metric," such as the cyclomatic complexity metric, which can be used to assess the complexity—and therefore the potential "bugginess"—of a software program.
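For reference, McCabe's cyclomatic complexity is computed from a routine's control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N is the number of nodes, and P is the number of connected components (P = 1 for a single routine); the higher the value, the more independent paths through the code there are to test.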

Performance. Performance requirements usually cover such categories as

  • Response time for a transaction: average, maximum

  • Throughput: transactions per second

  • Capacity: the number of customers or transactions the system can accommodate

  • Degradation modes: what is the acceptable mode of operation when the system has been degraded

If the new system has to share hardware resources with other systems or applications, it may also be necessary to stipulate the degree to which the implementation will make "civilized" use of such scarce resources as the CPU, memory, channels, disk storage, and network bandwidth.

Supportability. Supportability is the ability of the software to be easily modified to accommodate enhancements and repairs. For some application domains, the likely nature of future enhancements can be anticipated in advance, and a requirement could stipulate the "response time" of the maintenance group for simple enhancements, moderate enhancements, and complex enhancements.

For example, suppose that we are building a new payroll system; one of the many requirements of such a system is that it must compute the government withholding taxes for each employee. The user knows, of course, that the government changes the algorithm for this calculation each year. This change involves two numbers: instead of withholding X percent of an employee's gross salary up to a maximum of $P, the new law requires the payroll system to withhold Y percent up to a maximum of $Q. As a result, a requirement might say, "Modifications to the system for a new set of withholding tax rates shall be accomplished by the team within 1 day of notification by the tax regulatory authority."
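To see why this kind of change should be a "simple" enhancement, consider a minimal sketch in which the two numbers the government changes each year are isolated as parameters rather than buried in the code. The function name and the rate and cap values below are hypothetical, chosen only to illustrate the point.

    def withholding(gross_salary: float, rate: float, cap: float) -> float:
        """Withhold `rate` (as a fraction) of gross salary, up to a maximum of `cap` dollars."""
        return min(gross_salary * rate, cap)

    # Last year's rules: withhold X = 20 percent up to a maximum of P = $8,000.
    print(withholding(50_000, rate=0.20, cap=8_000))   # 8000.0

    # This year's rules: withhold Y = 22 percent up to a maximum of Q = $9,500.
    # Only the two parameters change; the rest of the calculation is untouched.
    print(withholding(50_000, rate=0.22, cap=9_500))   # 9500.0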

But suppose that the tax authority also periodically introduced "exceptions" to this algorithm: "For left-handed people with blue eyes, the withholding tax rate shall be Z percent, up to a maximum of $R." Modifications of this kind would be more difficult for the software people to anticipate; although they might try to build their system in as flexible a manner as possible, they would still argue that the modification for left-handed employees falls into the category of "medium-level" changes, for which the requirement might stipulate a response time of 1 week.

But suppose that at the outset of the project, the manager of the payroll department also said, "By the way, it's possible that we'll expand our operation overseas, in which case, the withholding tax algorithm would have to be adjusted to reflect the current laws in France and Germany and maybe Hong Kong, too." Assuming that such a "requirement" made any sense at all, it could probably be stated only in terms of goals and intentions; it would be difficult to measure and verify such a requirement.

What the requirement statement can do, in order to increase the chances that the system will be supportable in the manner just described, is stipulate the use of certain programming languages, database management system (DBMS) environments, programming tools, maintenance routines, programming styles and standards, and so on. (In this case, these really become design constraints, as we'll see.) Whether this does produce a system that can be maintained more easily is a topic for debate and discussion, but at least we can get closer to the goal.

Design Constraints

The third class of requirements, design constraints, typically impose limitations on the design of the system or the processes we use to build a system. For example,

  • Usually, a requirement allows for more than one design option; a design is a conscious choice among options. Whenever possible, we want to leave that choice to the designers rather than specifying it in the requirements, for they will be in the best position to evaluate the technical and economic merits of each option. Whenever we do not allow a choice to be made ("Use Oracle DBMS"), the design has been constrained, and a degree of flexibility and development freedom has been lost.

  • A requirement that is imposed on the process of building software ("Program in VB" or "use the XYZ class library") is a design constraint.

As illustrated with the preceding Visual Basic example, there may be many such sources and rationales, and the designers may have to accept them whether they like them or not. But it's important to distinguish them from more conventional requirements, for many of the constraints may be arbitrary, political, or subject to rapid technological change and might thus be subject to renegotiation at a later point.

We'll define design constraints as

restrictions on the design of a system, or the process by which a system is developed, that do not affect the external behavior of the system but that must be fulfilled to meet technical, business, or contractual obligations.

Design constraints can also be found in the developmental infrastructure immediately surrounding the system to be developed. These usually include

  • Operating environments: "Write the software in Visual Basic."

  • Compatibility with existing systems: "The application must run on both our new and old platforms."

  • Application standards: "Use the class library from Developer's Library 99-724 on the corporate IT server."

  • Corporate "best practices" and standards: "Compatibility with the legacy data base must be maintained." "Use our C++ coding standards."

Another important source of design constraints is the body of regulations and standards under which the project is being developed. For example, the development of a medical product in the United States is subject to a significant number of Food and Drug Administration (FDA) standards and regulations, imposed on not only the product but also the process by which the product is developed and documented. Typical regulatory design constraints might include regulations and standards from the following:

  • Food and Drug Administration (FDA)

  • Federal Communications Commission (FCC)

  • Department of Defense (DOD)

  • International Organization for Standardization (ISO) (No, this is not an error. ISO is a short, language-independent form, not an acronym.)

  • Underwriters Laboratories (UL)

Typically, the body of regulation imposed by these types of design constraints is far too lengthy to incorporate directly into your requirements. In most cases, it is sufficient to include the design constraints by reference in your package. Thus, your requirements might appear in the form: "The software shall fail safely per the provisions of TÜV Software Standard, Sections 3.1-3.4."

Incorporation by reference has its hazards, however. Where necessary, you should be careful to incorporate specific and relevant references instead of more general references. For example, a single reference of the form "The product must conform to ISO 601" effectively binds your product to all of the standards in the entire document. As usual, you should strive for the "sweet spot" between too much specificity and not enough.

Almost all projects will have some design constraints. Generally, the best way to handle them is to follow these guidelines; a small record sketch illustrating them appears after the list.

  • Distinguish them from the other requirements. For example, if you identified other software requirements with a tag, such as "SR," you might consider using "DC" for design constraints. You might be tempted to distinguish between true design constraints and regulatory constraints, but we have found that this distinction is seldom useful, and it can impose an unacceptable maintenance burden.

  • Include all design constraints in a special section of your collected requirements package, or use a special attribute so that they can be readily aggregated. That way, you can easily find them and review them when the factors that influenced them change.

  • Identify the source of each design constraint. By doing so, you can use the reference later to question or to revise the requirement. "Oh, this came from Bill in marketing. Let's go see if we can talk to him about this constraint." This would be a good time to supply a specific bibliographic reference in the case of regulatory standard references. That way, you can find the standard more easily when you need to refer to it later.

  • Document the rationale for each design constraint. In effect, write a sentence or two explaining why the design constraint was placed in the project. This will help remind you later as to what the motive was for the design constraint. In our experience, almost all projects eventually ask, "Why did we put this constraint in there?" By documenting the rationale, you will be able to more effectively deal with the design constraints in the later stages of the project when it (inevitably) will become an issue.
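As a minimal sketch of the guidelines above (the tags, sources, and rationale text are hypothetical, and any requirements tool or even a spreadsheet could hold the same fields), each design constraint can be kept as a record that carries its tag, source, and rationale alongside the constraint itself.

    from dataclasses import dataclass

    @dataclass
    class DesignConstraint:
        """One design constraint, tagged 'DC' to distinguish it from 'SR' software requirements."""
        tag: str         # e.g., "DC-1"
        text: str        # the constraint itself
        source: str      # who or what imposed it, so it can be questioned or revised later
        rationale: str   # why it was placed on the project

    constraints = [
        DesignConstraint(
            tag="DC-1",
            text="All client applications shall be written in Visual Basic.",
            source="Bill, marketing (customer insists on VB)",
            rationale="Customer will not accept delivery of applications in any other language.",
        ),
        DesignConstraint(
            tag="DC-2",
            text="The software shall fail safely per TÜV Software Standard, Sections 3.1-3.4.",
            source="TÜV regulatory standard (cite the exact edition in the bibliography)",
            rationale="Regulatory approval is required before the product can be sold.",
        ),
    ]

    # Aggregate all design constraints for review when the factors behind them change.
    for dc in constraints:
        print(f"{dc.tag}: {dc.text}  [source: {dc.source}]")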

Are Design Constraints True Requirements?

You could argue that design constraints are not true software requirements because they do not represent one of the five system elements in our elaborated definition. But when a design constraint is elevated to the level of legitimate business, political, or technical concern, it does meet our definition of a requirement as something necessary to "satisfy a contract, standard, specification, or other formally imposed documentation."

In those cases, it's easiest to treat the design constraint just like any other requirement and to make certain that the system is designed and developed in compliance with that design constraint. However, we should always strive to have as few design constraints as possible, since their existence may often restrict our options for implementing the other requirements, those that directly fulfill a user need.

Using Parent-Child Requirements to Increase Specificity

We have found that many projects will benefit from the use of parent-child requirements as a tool for augmenting certain basic requirements. We view a child requirement as an amplification of the specificity expressed in its parent requirement.

Let's consider an example. This time, we'll use a hardware example to illustrate the point. Suppose that you are developing an electronic device intended to work off standard electrical power. That is, the user is expected to plug the device into a wall outlet. The question arises, "How shall we specify the power requirements of the device?"

A perfectly natural response might be to include a product requirement that says, "The device shall operate off standard North American electrical power." But what does this mean? Your engineers will immediately besiege you for details on voltages, currents, frequencies, and so on. Of course, you could rewrite the requirement to include all of the needed details, but you will probably find that including all of the engineering details has obscured the original intent of the requirement. After all, you just want the device to work when it's plugged into a wall outlet!

In this case, you might wish to create some requirements to specify voltage, current, frequency, and so on. These requirements should be thought of as "children" of the parent requirement; indeed, we will frequently refer to parent-child relationships in a hierarchical requirement structure. Thus, you might find that specifying the electrical power needs for the device will appear as follows:

  • Parent: The device shall operate off standard North American power.

    • Child 1: The device shall operate in a voltage range of xxx–yyy volts AC.

    • Child 2: The device shall require not more than xxx AC amperes for correct operation.

    • Child 3: The device shall operate within specification over an input power frequency range of xx–yy hertz.

Parent-child requirements give you a very flexible way to enhance and to augment your specification while simultaneously controlling the depth of detail presented. In our example, it becomes straightforward to present the top-level specification in a way that is easily understandable by the users. At the same time, the detailed "child" specification can be easily inspected by the implementers to make sure that they understand all of the implementation details.

You can extend this notion for cases that require further amplification. For example, it is easy to imagine a case in which the "child" requirement becomes the "parent" requirement to a further level of detail. That is, you might wish to extend the hierarchy further and to detail the product needs as follows:

  • Parent:

    • Child 1:

      • Grandchild 1:

      • Grandchild 2:

But we want to insert a note of caution here. Although we have found the concept of parent-child requirements to be extremely useful, you must guard against adding too many hierarchical levels of detail; otherwise, you can get bogged down in so many microscopic details that you lose sight of the main user objective. We have found that most projects work quite well with only one sublevel of detail. On occasion, you might find it useful to move to two sublevels of detail—the "child" and the "grandchild"—but rarely is it useful to go below that level of detail.

Organizing Parent-Child Requirements

On balance, we have found that the best plan is to consider the child requirements to be no different from the parent requirements, and you should plan on including them in the main requirements package.

Readers can most easily relate a child requirement back to its parent if the child's identification follows a logical pattern based on the parent requirement's identification. For example, suppose that software requirement SR63.1 from Table 23-1 has one or more child requirements. A natural identification scheme for the child requirements would be to identify them as SR63.1.1, SR63.1.2, SR63.1.3, and so on. A hierarchical view of Table 23-1 might then appear as follows:

  • Feature 63

    • SR63.1

      • SR63.1.1

      • SR63.1.2

      • SR63.1.3

    • SR63.2

When managing a mixed software requirement/child requirement environment, a helpful feature is the ability to expand/collapse the total set of requirements so that you can view either the parents alone or the parents with the children.
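A minimal sketch of such a hierarchical view follows. The data structure is hypothetical, and the requirement texts are abbreviated or invented from the Table 23-1 example purely for illustration; the same structure supports either a collapsed view of the parents alone or an expanded view of parents with their children.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Requirement:
        rid: str                       # e.g., "SR63.1" or "SR63.1.1"
        text: str
        children: List["Requirement"] = field(default_factory=list)

    feature_63 = Requirement("Feature 63", "Provide trending information to assess project status.", [
        Requirement("SR63.1", "Trending information is provided in a histogram report.", [
            Requirement("SR63.1.1", "Time is shown on the x-axis."),
            Requirement("SR63.1.2", "The number of defects found is shown on the y-axis."),
        ]),
        Requirement("SR63.2", "The user can enter the trending period in days, weeks, or months."),
    ])

    def show(req: Requirement, max_depth: int = 99, depth: int = 0) -> None:
        """Print this node, then its children down to max_depth levels below the root."""
        print("  " * depth + f"{req.rid}: {req.text}")
        if depth < max_depth:
            for child in req.children:
                show(child, max_depth, depth + 1)

    show(feature_63, max_depth=1)   # collapsed: the feature and its parent requirements only
    show(feature_63)                # expanded: parents together with their children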

Looking Ahead

Now that we have examined the nature of requirements, we will turn to techniques for capturing and organizing them. Our next chapter will focus on a powerful technique to capture requirements. Subsequent chapters will focus on the issue of organizing the collection.
