The specification of a software product is a description of the functional requirements that the product must satisfy. It is not common to study software specifications in the context of software testing; we do so in this book for a variety of reasons, which are as follows:
With these premises in mind, we discuss the topic of software specifications as an important aspect of the study of software testing. While there is a wide range of specification languages that are used in research circles, some of which are relatively widely used in industry, we choose not to commit to any language, but rather to use mathematical notation, and focus on models rather than languages.
The specification of a software product is a description of the properties that the product must have to fulfill its purpose. The specification is usually derived by identifying all the relevant stakeholders of the (existing or planned) software product, eliciting the requirements that they expect the product to meet, formulating and combining these requirements, and compiling them into a cohesive document. While specifications typically pertain to functional and operational requirements, we focus primarily on functional requirements in this book, that is, requirements that pertain to the input/output behavior of the software product.
As a product, a specification must meet two conditions, which are as follows:
As a process (the process of identifying stakeholders, eliciting requirements, compiling them, etc.), a specification must meet two conditions, which are as follows:
A specification that is deemed to be complete and minimal is said to be valid. In Sections 4.2 and 4.3, we will have an opportunity to discuss how to ensure the validity of specifications; in the remainder of this section, we introduce elements of relational mathematics, which we use throughout the book, starting in this chapter.
We represent sets using a programming-like notation, by introducing variable names and associated data type (sets of values). For example, if we represent set S by the variable declarations
x: X; y: Y; z: Z,
then S is the Cartesian product X × Y × Z. Elements of S are denoted in lower case s and are triplets of elements of X, Y, and Z. Given an element s of S, we represent its X component by x(s), its Y component by y(s), and its Z component by z(s). A relation on S is a subset of the Cartesian product S×S; given a pair (s,s′) in R, we say that s′ is an image of s by R. Special relations on S include the universal relation L=S × S, the identity relation I={(s,s′)| s′=s}, and the empty relation φ={}. To represent relations graphically, we use the Cartesian plane in which set S is represented on the abscissas (for s) and the ordinates (for s′). Using this device, we represent an arbitrary relation on S, as well as L, I, and φ in Figure 4.1.
Because a relation is a set, we can apply to relations all the operations that are applicable to sets, such as union (∪), intersection (∩), difference (\), and complement (which we denote by an overbar). In addition, we define the following operations:
Figure 4.2 depicts a graphic illustration of a relation, its complement, and its converse.
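When space S is finite, these constructions are easy to experiment with. The following sketch (our own Python encoding of relations as sets of pairs; the variable names are ours) builds the special relations L, I, and φ on a small space and computes the complement and converse of an arbitrary relation.

```python
from itertools import product

S = {1, 2, 3}

# Special relations on S
Luniv = set(product(S, S))               # universal relation L = S x S
I = {(s, s) for s in S}                  # identity relation
phi = set()                              # empty relation

R = {(1, 2), (2, 3), (3, 3)}             # an arbitrary relation on S

complement_R = Luniv - R                 # all pairs of S x S that are not in R
converse_R = {(sp, s) for (s, sp) in R}  # each pair of R, reversed

print(converse_R)   # {(2, 1), (3, 2), (3, 3)}
```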
Given a set A (subset of S), we define three relations of interest, which are as follows:
Figure 4.3 represents, for set A (a subset of S), the vector, inverse vector, and monotype defined by A.
Given two relations R and R′, we let the product of R by R′ be denoted by R•R′ (or RR′, if no ambiguity arises) and defined by R•R′ = {(s, s′) | ∃s″: (s, s″) ∈ R ∧ (s″, s′) ∈ R′}. Figure 4.4 illustrates the definition of relational product.
If we denote the vector and the inverse vector defined by A by, respectively, ω(A) and μ(A), then the following identities hold, by virtue of the relevant definition:
Vectors are a convenient (relational) way to represent sets, when we want everything to be a relation. Hence, for example, the domain of relation R can be represented by the vector RL, and the range of relation R can be represented by the inverse vector LR (Fig. 4.5).
Note that we can represent the prerestriction and the postrestriction of a relation to a set, say A, using the vector and inverse vector defined by A, as shown in Figure 4.6.
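These constructions can be sketched concretely for finite relations. In the hypothetical Python encoding below (function names are ours), we compute the relational product, the vector ω(A) and inverse vector μ(A) of a set A, the domain and range of R via RL and LR, and the pre- and postrestriction of R to A.

```python
from itertools import product as cartesian

S = {1, 2, 3, 4}
Luniv = set(cartesian(S, S))   # universal relation L

def rel_product(R, Rp):
    """R • R': pairs (s, s') such that some s'' satisfies (s, s'') in R and (s'', s') in R'."""
    return {(s, sp) for (s, t) in R for (tp, sp) in Rp if t == tp}

def vector(A):      # omega(A) = A x S
    return set(cartesian(A, S))

def inv_vector(A):  # mu(A) = S x A
    return set(cartesian(S, A))

R = {(1, 2), (2, 3)}

dom_R = {s for (s, _) in rel_product(R, Luniv)}    # domain of R, read off RL
rng_R = {sp for (_, sp) in rel_product(Luniv, R)}  # range of R, read off LR

A = {2}
prerestriction = vector(A) & R        # pairs of R whose input is in A
postrestriction = R & inv_vector(A)   # pairs of R whose output is in A
```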
Among the properties of relations, we cite the following:
Figure 4.7 illustrates some of these properties.
While the study of relations may sound alien to software testing, relations can be used to model specifications, which are an important part of software testing. They are the basis for the design and implementation of oracles, and more generally, they serve to define what program correctness is, what a fault is, what fault removal is, and what relative correctness is, all of which are essential aspects of software testing.
If one asks junior computer science (CS) students in a programming course to write a C++ function that reads a real number and computes its square root, they would rush immediately to their computers to write code and run it; and yet this problem statement, despite being simple and short, leaves many questions unanswered. Consider that this statement may be interpreted in a wide range of ways, leading to a wide range of possible specifications, where space S is defined to be the set of real numbers:
We could go on and on. This simple example highlights two lessons: first, the importance of precision in specifying program requirements; second, the premise that relations enable us to achieve the required precision.
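To make the ambiguity concrete, here are a few of the possible readings written as membership predicates on pairs (s, s′) — since S is the set of reals, we encode each candidate specification as a predicate rather than an enumerated set. The function names and the particular readings chosen are our own illustrations, not taken from the text.

```python
# Each function decides whether the pair (s, s') belongs to the relation.

def sqrt_exact(s, sp):
    # s' is an exact square root of s (positive or negative root allowed)
    return sp * sp == s

def sqrt_nonneg(s, sp):
    # s' is the nonnegative square root of s; nothing is required when s < 0
    return s < 0 or (sp >= 0 and sp * sp == s)

def sqrt_approx(s, sp, eps=1e-6):
    # s' approximates the nonnegative square root of s to within eps
    return s >= 0 and sp >= 0 and abs(sp * sp - s) <= eps

print(sqrt_exact(4.0, -2.0), sqrt_nonneg(4.0, -2.0))   # True False: the readings disagree
```

A program that is correct with respect to one of these relations may be incorrect with respect to another; hence the need to pin down which relation is intended.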
As a second illustrative example, consider the following requirement pertaining to space S defined by an array a[1..N] of some type, where N is greater than or equal to 1, a variable x of the same type, and an index variable k, which we use to address array a: Search x in a and place its index in k. Again, this simple requirement lends itself to a wide range of interpretations, some of which we write as follows, along with their relational representation:
Note that F1 can be written simply as since the clause is a logical consequence of . We draw the reader’s attention to the importance of carefully watching which variables are primed and which are unprimed in a specification. By writing F1 as we did, we mean that the final value of k points to a location in the original array a where the original value of x is located. As written, this relation specifies a search program. If, instead of F1, we had written the specification as follows:
then it would be possible to satisfy this specification by the following simple program:
{k=1; x=a[1];}
If, instead of F1, we had written the specification as follows:
then it would be possible to satisfy this specification by the following simple program:
{k=1; a[1]=x;}
If, instead of F1, we had written the specification as follows:
then it would be possible to satisfy this specification by the following simple program:
{k=1; x=0; a[1]=0;}
None of these three programs performs a search of variable x in array a.
When we consider specifications on a given space S, we find it natural to order them according to the strength of their requirement, that is, some of them impose more requirements than others. Let us, for the sake of illustration, consider the specifications of the search program written in the previous section:
It is natural and intuitive to consider that F2 is stronger than F1 since the latter would be satisfied with k′ pointing to any occurrence of x in a, while the former requires that k′ point to the smallest such occurrence. Also, it is natural to consider that F3 is stronger than F1 since the former requires that a and x be preserved whereas the latter does not; for the same reason, F4 is stronger than F2. On the other hand, F5 can be considered stronger than F1 since the latter makes provisions for the case when x is not in a, whereas the former does not; for the same reason, we can consider that F6 is stronger than F2, that F7 is stronger than F3, and that F8 is stronger than F4. These ordering relations are depicted in Figure 4.8.
We notice that F2 and F5 are both considered stronger than F1, but while the former is a subset of F1, the latter is a superset thereof. There appear to be two (nonexclusive) ways for a specification R to be considered stronger than a specification R′: by having a larger domain, and by having fewer images for elements in the common domain. Whence the following definition.
Henceforth, we use the term refines to refer to the property of being a “stronger” specification. We admit without proof that the relation refines is a partial ordering between specifications, that is, that it is reflexive, antisymmetric, and transitive. We refer to this relation as the refinement ordering. The graph in Figure 4.9 shows two relations, R′ and R″, that refine relation R.
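For finite relations, refinement can be checked mechanically. The sketch below (our own Python encoding) follows the proof obligations used later in this chapter: R refines R′ when the domain of R contains that of R′, and the restriction of R to the domain of R′ is a subset of R′. The example relations are ours, chosen to exhibit the two ways of refining.

```python
def dom(R):
    return {s for (s, _) in R}

def refines(R, Rp):
    """R refines R': dom(R) contains dom(R'), and on dom(R')
    every image that R allows is also allowed by R'."""
    return dom(Rp) <= dom(R) and \
           {(s, sp) for (s, sp) in R if s in dom(Rp)} <= Rp

# Rp refines R by having a larger domain; Rpp refines R by having
# fewer images on the common domain (cf. Figure 4.9).
R   = {(1, 1), (1, 2), (2, 1)}
Rp  = {(1, 1), (1, 2), (2, 1), (3, 3)}   # larger domain
Rpp = {(1, 1), (2, 1)}                   # fewer images

print(refines(Rp, R), refines(Rpp, R), refines(R, Rp))   # True True False
```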
Let space S be defined by a real array a[1..N], a real variable x, and an index variable k. We are interested in writing a relation to reflect the following requirement:
Place in x the largest value of a and in k the smallest index where the largest value occurs.
For example, if array a has the following values,
index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
value | 8.9 | 9.1 | 5.2 | 9.4 | 9.4 | 0.12 | 4.3 | 8.2 | 9.4 | 3.1 | 2.6 | 9.4
then we want x′ to be equal to 9.4 and k′ to be equal to 4. Because it is too difficult to specify all the requirements at once, we consider them one by one:
Then we compute the overall specification as the intersection of M1, M2, M3, and M4, that is,
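The intersection of these clauses can be exercised on the sample array above. In the sketch below, the precise content of M1 through M4 is our assumed reading of the four requirements (x′ occurs in a, x′ bounds every element of a, a[k′] = x′, and no index before k′ holds x′); the membership test decides whether a candidate pair (x′, k′) belongs to the intersection.

```python
def satisfies(a, xp, kp):
    """Membership test for the intersection of the four clauses, under our
    assumed reading (1-based indices, as in the text)."""
    return (xp in a and                                  # x' occurs in a
            all(v <= xp for v in a) and                  # x' bounds every element
            1 <= kp <= len(a) and a[kp - 1] == xp and    # a[k'] = x'
            all(a[i] != xp for i in range(kp - 1)))      # no earlier occurrence

a = [8.9, 9.1, 5.2, 9.4, 9.4, 0.12, 4.3, 8.2, 9.4, 3.1, 2.6, 9.4]
print(satisfies(a, 9.4, 4))   # True
print(satisfies(a, 9.4, 5))   # False: 9.4 already occurs at index 4
```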
As a second example, we consider space S made up of three nonnegative real variables x, y, z, and a variable t that represents the enumerated type: {notri, scalene, isosceles, equilateral, right, rightisoceles}. We consider the following requirement:
Given that x, y, and z represent the sides of a triangle, place in t the class of the triangle represented by x, y, and z from the set {notri, scalene, isosceles, equilateral, rightisoceles, right}. We assume that the label “isosceles” is reserved for triangles that are isosceles but not equilateral and that the label ‘right’ is reserved for triangles that are right but not isosceles.
To write this specification, we write one relation for each type of triangle, then form their union. To this effect, we define the following predicates in triplets of real numbers:
Using these predicates, we define the following relations:
Using these relations, we form the relational specification of the triangle classification problem:
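The union of these relations can be animated by a classification function. The sketch below is our own Python rendering (predicate names are ours); it assumes the usual triangle inequality for the notri test and uses an approximate comparison for the right-angle test, since the sides are real numbers. The label spellings follow the text's enumerated type.

```python
import math

def is_triangle(x, y, z):
    # each side must be positive and strictly less than the sum of the others
    return x > 0 and y > 0 and z > 0 and \
           x < y + z and y < x + z and z < x + y

def classify(x, y, z):
    if not is_triangle(x, y, z):
        return "notri"
    equilateral = x == y == z
    isosceles = x == y or y == z or x == z
    a, b, c = sorted((x, y, z))
    right = math.isclose(a * a + b * b, c * c)   # Pythagorean test, tolerant of rounding
    if equilateral:
        return "equilateral"
    if right and isosceles:
        return "rightisoceles"
    if right:
        return "right"     # right but not isosceles, per the text's convention
    if isosceles:
        return "isosceles" # isosceles but not equilateral, per the text's convention
    return "scalene"

print(classify(3, 4, 5))              # right
print(classify(1, 1, math.sqrt(2)))   # rightisoceles
```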
These two examples raise the following question: how do we generate a complex specification from simple, elementary specifications?
The question that we wish to address then is as follows: Given two relations R1 and R2 on space S, how do we compose them into a specification that captures all the requirements of R1 and all the requirements of R2 (for the sake of completeness) and nothing more (for the sake of minimality), assuming that the domains of R1 and R2 are neither (necessarily) identical nor (necessarily) disjoint? Consider the following graph depicting the configuration of dom(R1) and dom(R2) (Fig. 4.10).
If we want R to capture all the specification information of R1 and all the specification information of R2, then R has to be identical to R1 outside the domain of R2 and identical to R2 outside the domain of R1, and for each element of the intersection of the domains of R1 and R2, it has to be identical to the intersection of R1 and R2. This justifies the following definition.
This formula is a mere relational representation of Figure 4.10, depicting how the join can be derived from R1 and R2. The following proposition, which we present without proof, gives an important property of the join operator.
Figure 4.11 shows an example of two relations R1 and R2 that satisfy the compatibility condition and shows their join.
To check whether R1 and R2 verify the compatibility condition, we compute the following:
Hence the condition is verified. Since R1 and R2 verify the compatibility condition, their join, which we denote by R, represents the least refined relation that refines them both. On input 1 (outside the domain of R2), R behaves as R1; on input 6 (outside the domain of R1), R behaves like R2; and on inputs {2,3,4,5} (the intersection of the domains of R1 and R2), R behaves like the intersection of R1 and R2 (which includes {(2,2),(3,3),(4,4),(5,5)}).
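For finite relations, both the compatibility check and the join can be computed directly from the case analysis of Figure 4.10. The sketch below is our own Python encoding; the example relations are ours, not those of Figure 4.11.

```python
def dom(R):
    return {s for (s, _) in R}

def compatible(R1, R2):
    # every element that belongs to both domains must have a common image
    return dom(R1) & dom(R2) == dom(R1 & R2)

def join(R1, R2):
    """Least refined relation that refines both R1 and R2: behave like R1
    outside dom(R2), like R2 outside dom(R1), like R1 ∩ R2 on the common domain."""
    return {(s, sp) for (s, sp) in R1 if s not in dom(R2)} | \
           {(s, sp) for (s, sp) in R2 if s not in dom(R1)} | (R1 & R2)

R1 = {(1, 1), (2, 2), (2, 0), (3, 3)}
R2 = {(2, 2), (3, 3), (3, 9), (6, 6)}

print(compatible(R1, R2))   # True
print(join(R1, R2))         # {(1, 1), (2, 2), (3, 3), (6, 6)}
```

By contrast, relations that assign disjoint image sets to a shared input (as in Figure 4.12) fail the compatibility test, and no join exists.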
As a second example, consider the following relations R1 and R2 (Fig. 4.12).
In this case, R1 and R2 do not satisfy the compatibility condition since 4 belongs to the domain of each one of them but does not belong to the domain of their intersection. Indeed, it is not possible to find a relation that refines them simultaneously since R1 assigns images 5 and 6 to 4, whereas R2 assigns images 2 and 3; there is no value that R may assign to 4 to satisfy both R1 and R2.
The software engineering literature is replete with examples of software projects that fail, not because programmers do not know how to write code or how to test it, but rather because analysts and engineers fail to write valid specifications, that is, specifications that capture all the relevant requirements (for the sake of completeness) and nothing but relevant requirements (for the sake of minimality). Consequently, it is important to validate specifications for completeness and minimality and to invest the necessary resources to this effect before proceeding with subsequent phases of the software lifecycle. In this section, we briefly and cursorily discuss the process of specification validation, in the narrow context of the relational specifications that we introduce in this chapter, with the modest goal of giving the reader some sense of what it may mean to validate a specification.
Let us start with a very simple illustrative example: We consider space S defined by natural variables x and y, and we consider the following requirement:
Increase x while preserving the sum of x and y.
We submit the following relations as possible specifications for this requirement:
We invite the reader to ponder the following questions: which of these specifications are complete, and, for those that are complete, which are minimal? The following table shows our answers to these questions (if a specification is not complete, it makes no sense to check its minimality):
Specification | Complete? | Minimal? | Valid?
N1 | No | N/A | No
N2 | No | N/A | No
N3 | Yes | No | No
N4 | No | N/A | No
N5 | Yes | Yes | Yes
N6 | Yes | Yes | Yes
N7 | Yes | No | No
N8 | Yes | No | No
Specifications N5 and N6 are complete and minimal (and are identical, in fact); they specify that x must be increased while preserving the sum of x and y. Specifications N1 and N2 are not complete because they do not stipulate that x must increase (they allow it to stay constant); and specification N4 is not complete because it fails to specify that the sum of x and y must be preserved. Specifications N3, N7, and N8 are complete but not minimal because they specify by how much x must be increased, which is not stipulated in the requirement.
In the example above, we wrote the specifications on the basis of the proposed requirement (to increase x while preserving the sum of x and y) and we judged the completeness and minimality of candidate specifications by considering the same source, that is, the proposed requirement. If the same person or group is tasked with generating the candidate specifications and judging their validity (completeness and minimality), then the same biases that cause the person to write invalid specifications may cause him/her to overlook the invalidity of their specification. The only way to ensure a measure of confidence in the validation of the specification is to separate the team that generates the specification from the team that validates it. To this effect, we propose the following two-team, two-phase approach:
Phase | Specification Team | Validation Team
Specification Generation | Generating the specification from the sources of requirements | Generating validation data from the same sources of requirements
Specification Validation | Updating the specification according to feedback from the validation team | Testing the specification against the validation data generated earlier
It remains to discuss what form the validation data takes, and how one tests a specification against it. The answers to these questions are given in the following definitions.
Implicit in this definition is that a good completeness property is one that has the potential to detect an incomplete specification; in other words, a good completeness property is one that the validation team believes the specifier team may have overlooked.
Implicit in this definition is that a good minimality property is one that has the potential to detect a nonminimal specification; in other words, a good minimality property is one that the validation team believes the specifier team may have inadvertently recorded in the specification.
Completeness and minimality are not absolute attributes but rather relative with respect to selected completeness and minimality properties, as provided in the following definition.
We admit without proof that if R refines all of V1, V2, …, Vn, then it refines their join. Hence, the range of valid specifications with respect to completeness properties and minimality properties is represented in Figure 4.13.
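This validation scheme can be mechanized on a finite window of the space. In the hypothetical Python sketch below, the requirement is the earlier one (increase x while preserving x + y, over natural variables); the candidate specification N is tested against one completeness property V (which it must refine) and one minimality property W (which it must not refine). The specific properties, and the guard y ≥ 1 placed on both of them so that their domains fit inside the candidate's on the finite window, are our own assumptions.

```python
from itertools import product

IN  = list(product(range(4), range(4)))   # a finite window on the input space
OUT = list(product(range(8), range(8)))   # large enough to hold every image

def relation(pred):
    return {(s, sp) for s in IN for sp in OUT if pred(s, sp)}

def dom(R):
    return {s for (s, _) in R}

def refines(R, Rp):
    return dom(Rp) <= dom(R) and \
           {(s, sp) for (s, sp) in R if s in dom(Rp)} <= Rp

# Candidate specification: increase x while preserving x + y
N = relation(lambda s, sp: sp[0] > s[0] and sum(sp) == sum(s))

# Completeness property (must be refined): when y >= 1, the sum is preserved
V = relation(lambda s, sp: s[1] >= 1 and sum(sp) == sum(s))

# Minimality property (must NOT be refined): x increases by exactly 1
W = relation(lambda s, sp: s[1] >= 1 and sp[0] == s[0] + 1 and sp[1] == s[1] - 1)

print(refines(N, V), refines(N, W))   # True False: complete w.r.t. V, minimal w.r.t. W
```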
As an illustrative example of specification validation, consider the following requirement pertaining to space S defined by an array a[1..N] of some type, a variable x of the same type, and an index variable k, which we use to address array a: Given that x is known to be in a, place in k the smallest index where x occurs. This is a variation of the example discussed in Section 4.2.1.
Examples of completeness properties include the following:
Examples of minimality properties include the following:
So far, we have looked at the requirements documentation, but we have not looked at candidate specifications; generating validation data independently of specification generation is important, for the sake of redundancy. Now, let us consider a candidate specification and check whether it is complete with respect to the completeness properties and minimal with respect to the minimality properties. We consider specification F2, introduced in Section 4.2.1 as
To prove that F2 refines V1, we must prove that F2 has a larger domain than V1 and that the restriction of F2 to the domain of V1 is a subset of V1. We find the following:
Clearly, V1L is a subset of F2L. We compute the restriction of F2 to V1L, and we find the following:
= {substitution}
= {simplification}
= {logic simplification}
= {logic simplification}
= {substitution}
We now consider the completeness property V2. To prove that F2 refines V2, we must prove that F2 has a larger domain than V2 and that the restriction of F2 to the domain of V2 is a subset of V2. We find the following:
Clearly, V2L is a subset of F2L. We compute the restriction of F2 to V2L, and we find the following:
= {substitution}
= {simplification, redundancy}
= {logic}
= {substitution}
We now consider the completeness property V3. To prove that F2 refines V3, we must prove that F2 has a larger domain than V3 and that the restriction of F2 to the domain of V3 is a subset of V3. We find the following:
Clearly, V3L is a subset of F2L. We compute the restriction of F2 to V3L, and we find the following:
= {substitution}
= {simplification}
= {logic}
= {simplification, redundancy}
= {substitution}
We turn our attention to checking the minimality of F2 with respect to W1 and W2.
Because F2 and W1 have the same domain, the only way to prove that F2 does not refine W1 is to prove that F2 is not a subset of W1. To this effect, we compute the following:
= { F2 and W1 have the same domain}
which is not a subset of W1. Hence F2 is minimal with respect to W1. We can prove, likewise, that it is minimal with respect to W2; indeed, F2 preserves neither x nor a.
The refinement ordering introduced in this chapter enables us to revisit a concept we had discussed in Chapter 2, namely, the contrast between reliability and safety. As we remember, the reliability of a system is its ability/likelihood of avoiding failure whereas the safety of a system is its ability/likelihood of avoiding catastrophic failure; because catastrophic failures are failures, one may be tempted to argue that a reliable system is necessarily safe, but that is not the case. Indeed, reliability and safety are not logical/Boolean properties but stochastic properties; hence the argument that catastrophic failures are failures does not enable us to infer that reliable systems are necessarily safe. Rather, because the stakes attached to meeting the safety requirements are much higher than those attached to meeting the reliability requirement, the threshold of probability that must be reached for a system to be considered safe is much higher than the threshold of probability that must be reached for a system to be considered reliable.
This idea can be elucidated by means of the refinement ordering: Let R be the specification that represents the reliability requirements of a system, and let F be the specification that represents its safety requirements. For the sake of illustration, we consider a simple example of a system that controls the operation of traffic lights at an intersection.
The following observations are typical of a reliability–safety relationship:
Figure 4.14 shows specifications R and F, ordered by refinement, and illustrates the relationship between the various possible behaviors of candidate programs, with corresponding probabilities of the behaviors in question: reliable behavior, (possibly unreliable but) fail-safe behavior, and unsafe behavior.
Whereas specifications we have studied so far are adequate for specifying programs that take an input (or initial state) and map it onto an output (or final state), they are inadequate to represent programs whose response depends not only on their input but also on their internal state; the subject of this section is to explore ways to specify such systems.
As we recall from our discussion in Section 4.1, specifications have to have two key attributes, which are formality and abstraction. We can achieve formality by using a mathematical notation, which associates precise semantics to each statement. As for abstraction, we can achieve it by ensuring that the specifications describe the externally observable attributes of candidate software products, but do not specify, dictate, or otherwise favor any specific design or implementation.
We consider the following description of a stack data type: A stack is a data type that is used to store items (through operation push()) and to remove them in reverse order (through operation pop()); operation top() returns the most recently stored item that has not been removed, operation size() returns the number of items stored and not removed, and operation empty() tells whether the stack has any items stored; operation init() reinitializes the stack to an initial situation, where it contains no elements. Imagine that we want to specify a stack without saying anything about how to implement it. How would we do it? Most data structure courses introduce stacks by showing a data structure made up of an array and an index into the array and by explaining how push and pop operations affect the array and its index; but such an approach violates the principle of abstraction since it specifies the stack by describing a possible implementation thereof. An alternative could be to specify the stack by means of an abstract list, along with list operations, without specifying how the list is implemented. We argue that this too violates the principle of abstraction as it dictates a preferred implementation; in fact, a stack does not necessarily require a list of elements, regardless of how the list is represented, as we show here.
init(): {n=0;}
push(a): {n=n+1;} // a is the only value that can be stacked
pop(): {if (n>0) {n=n-1;}}
top(): {if (n>0) {return a;} else {return error;}}
size(): {return n;}
empty(): {return (n==0);}
init(): {n=1;}
push(a): {n=2*n+code(a);} // code(a) maps the two symbols onto 0 and 1
pop(): {if (n>1) {n=n div 2;}}
top(): {if (n==1) {return error;} else {return decode(n mod 2);}} // decode() is the inverse of code()
size(): {return floor(log2(n));}
empty(): {return (n==1);}
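The second implementation can be transcribed into runnable form. The sketch below is our own Python rendering (the class name and method shapes are ours); it keeps the entire stack state in the single integer n, using a binary encoding with a leading 1 as sentinel, for a two-symbol alphabet {a, b}.

```python
class IntStack:
    """Stack over the two-symbol alphabet {a, b}, stored in one integer."""

    CODE = {"a": 0, "b": 1}
    DECODE = {0: "a", 1: "b"}

    def __init__(self):
        self.n = 1                         # init(): the empty stack

    def push(self, sym):
        self.n = 2 * self.n + self.CODE[sym]

    def pop(self):
        if self.n > 1:
            self.n //= 2

    def top(self):
        if self.n == 1:
            return "error"
        return self.DECODE[self.n % 2]

    def size(self):
        return self.n.bit_length() - 1     # floor(log2(n))

    def empty(self):
        return self.n == 1

s = IntStack()
s.push("a"); s.push("b")
print(s.top(), s.size())   # b 2
```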
Hence, for the sake of abstraction, we resolve to specify the stack by describing its externally observable behavior, without making any assumption, regardless of how vague, about its internal structure. To this effect, we specify a stack by means of three parameters, which are as follows:
From the set of inputs X, we build the set of input histories, H, where an input history is a sequence of inputs; this is needed because the behavior of the stack is not determined solely by the current input but involves past inputs as well.
We can go on describing possible input histories and corresponding outputs. In doing so, we are specifying how operations interact with each other but we are not prescribing how each operation behaves; this leaves maximum latitude to the designer, as mandated by the principle of abstraction.
It is clearly impractical to specify data types by listing elements of their relations; in the next section, we explore a closed-form representation for such relations.
We propose to represent the relation of a specification by means of an inductive notation, where we do induction on the structure of the input history; this notation includes two parts, which are as follows:
As an illustration, we represent the specification of the stack using axioms and rules. Throughout this presentation, we let a be an arbitrary element of itemtype and y an arbitrary element of Y; also, we let h, h′, h″ be arbitrary elements of H and h+ an arbitrary non-null element of H.
Axioms. We use axioms to represent the output of input histories that end with an operation in set VX (that reports on the state), namely in this case top, size, and empty. It is understood that input histories that end with an operation in set AX (that affects the state) produce no meaningful output; hence we assume that for such input histories, the output is any element of Y.
Seeking the top of an empty stack returns an error.
Operation top returns the most recently stacked item.
The size of an empty stack is zero.
An initial stack is empty.
A stack that contains element a is not empty.
Rules: Whereas axioms characterize the behavior of the stack for simple input sequences, rules establish relations between the behavior of the stack for complex input histories and their behavior for simpler input histories.
Operation init reinitializes the stack; whether sequence h′ intervened prior to init or did not makes no difference for the future behavior (h) of the stack.
A pop operation on an empty stack has no effect: whether it occurred or did not occur makes no difference for the future behavior (h) of the stack.
A pop operation cancels the most recent push: whether the sequence push(a).pop occurred or did not makes no difference to the future behavior of the stack, though not to the present (if h ends with an operation in VX, we could not say that stack(init.h)=stack(init.h.push(a).pop) as the left-hand side returns a specific value but the right-hand side returns an arbitrary value).
We assume that the stack is unbounded; hence any push operation increases the size by 1.
Removing a push or adding a pop to the input history of a stack makes it more empty (i.e., if it was empty prior to removing push or adding pop, it is a fortiori empty afterward).
VX operations leave no trace of their passage; once they are serviced and another operation follows them, they are forgotten: whether they occurred or did not occur has no impact on the future behavior of the stack.
We have written a closed-form specification of the stack in such a way that we describe solely the externally observable properties of the stack, without any reference to how a stack ought to be implemented; a programmer who reviews this specification has all the latitude he/she needs to implement this stack as he/she sees fit.
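These axioms and rules can be animated mechanically. The following sketch (the tuple encoding of operations is our own, hypothetical choice) normalizes an input history by applying the rules — init erases the past, pop on an empty stack is a no-op, pop cancels the latest push, and VX operations leave no trace once another operation follows them — and then applies the axioms to the normal form. For instance, the history pop.init.push(a).init.push(a).top evaluates to a.

```python
def stack(history):
    """Evaluate a stack input history by applying the rules of the
    specification, then the axioms. Each operation is a tuple,
    e.g. ('push', 'a') or ('top',); history is a list of operations."""
    VX = ("top", "size", "empty")
    # init rule: only the segment after the last init matters
    if ("init",) in history:
        last_init = len(history) - 1 - history[::-1].index(("init",))
        history = history[last_init + 1:]
    if not history:
        return None
    body, last = history[:-1], history[-1]
    # VX rule: top, size, empty leave no trace once another operation follows
    body = [op for op in body if op[0] not in VX]
    # pop rules: pop cancels the latest push; pop on an empty stack is a no-op
    content = []
    for op in body:
        if op[0] == "push":
            content.append(op[1])
        elif op[0] == "pop" and content:
            content.pop()
    # axioms: the final VX operation reports on the resulting state
    if last == ("top",):
        return content[-1] if content else "error"
    if last == ("size",):
        return len(content)
    if last == ("empty",):
        return not content
    return None  # histories ending with an AX operation produce no meaningful output

h = [("pop",), ("init",), ("push", "a"), ("init",), ("push", "a"), ("top",)]
print(stack(h))   # a
```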
We discuss how to represent the specification of a queue, in the same way that we wrote the specification of a stack earlier. We represent in turn the input space (from which we infer the set of input histories), then the output space, then the relation, which we denote by queue.
Input Space. We let X be defined as X = {init, dequeue, front, size, empty} ∪ {enqueue} × itemtype. We partition this set into AX = {init, enqueue, dequeue} and VX = {front, size, empty}. We let H be the set of sequences of elements of X.
Output Space. We let the output space be defined as Y = itemtype ∪ integer ∪ Boolean ∪ {error}.
Axioms. We propose the following axioms:
Rules: We propose the following rules.
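By analogy with the stack evaluator, the queue specification can be animated the same way. The sketch below assumes the natural queue counterparts of the stack rules — init erases the past, dequeue on an empty queue is a no-op, dequeue removes the least recently enqueued item, and VX operations leave no trace — which is our reading, not a transcription of the rules themselves.

```python
from collections import deque

def queue(history):
    """Evaluate a queue input history (same hypothetical tuple encoding
    as the stack evaluator); dequeue removes the oldest enqueued item."""
    VX = ("front", "size", "empty")
    if ("init",) in history:
        last_init = len(history) - 1 - history[::-1].index(("init",))
        history = history[last_init + 1:]
    if not history:
        return None
    body, last = history[:-1], history[-1]
    body = [op for op in body if op[0] not in VX]
    content = deque()
    for op in body:
        if op[0] == "enqueue":
            content.append(op[1])
        elif op[0] == "dequeue" and content:
            content.popleft()          # first in, first out
    if last == ("front",):
        return content[0] if content else "error"
    if last == ("size",):
        return len(content)
    if last == ("empty",):
        return not content
    return None

h = [("init",), ("enqueue", "a"), ("enqueue", "b"), ("dequeue",), ("front",)]
print(queue(h))   # b
```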
As a third illustrative example, we discuss how to represent the specification of a set, in the same way that we wrote the specifications of a stack and a queue earlier. We represent in turn the input space (from which we infer the set of input histories), then the output space, then the relation, which we denote by set.
Input Space: Before we introduce the input space, let us review quickly what we want this set abstract data type (ADT) to do: we want the ability to reinitialize the set, pick a random element of the set, enumerate all its elements, and return its smallest element and its largest element (assuming its elements are ordered). Also, we want to be able to insert, delete, and search designated elements of the set. Hence, we let X be defined as
X = {init, pick, min, max, enumerate, size} ∪ {insert, delete, search}× itemtype.
We partition this set into AX = {init, insert, delete} and VX = {pick, min, max, size, enumerate, search}. We let H be the set of sequences of elements of X.
Output Space: We let the output space be defined as
Y = itemtype ∪ P(itemtype) ∪ {error} ∪ integer ∪ boolean,
where P(itemtype) is the power set of itemtype.
Axioms: We propose the following axioms:
Rules: We propose the following rules:
In the previous section, we have written specifications of a number of ADTs, namely, a stack, a queue, and a set. How do we know that our specifications are valid, that is, that they capture all the properties we want them to capture (completeness) and nothing else (minimality)? To bring a measure of confidence in the validity of these specifications, we envision a validation process, similar to the process we advocated in Section 4.2.3, though this time (for the sake of simplicity) we focus solely on completeness. We imagine that while we are writing these specifications, an independent verification and validation team is generating formulas of the form
for different values of h and y, on the grounds that whatever we write in our specification should logically imply these statements. Then the validation step consists in checking that the proposed formulas can be inferred from the axioms and rules of our specification. If they can, then we can conclude that our specification is complete with respect to the proposed formulas; if not, then we need to check with the verification and validation team to see whether our specification is incomplete or perhaps the validation data is erroneous.
For the sake of illustration, we check whether our specification is valid with respect to the formulas written in Section 4.3 as sample input/output pairs of our stack specification:
For V1, we find the following:
stack(pop.init.push(a).init.push(a).top)
= {by the init Rule}
stack(init.push(a).top)
= {by the second top axiom}
a. QED
For V2, we find the following:
stack(pop.init.pop.push(a).push(b).top.top)
= {by the init Rule}
stack(init.pop.push(a).push(b).top.top)
= {by the VX Rule pertaining to top}
stack(init.pop.push(a).push(b).top)
= {by the second top axiom}
b. QED
For V3, we find the following:
stack(init.pop.push(a).pop.push(a).pop.push(a).size)
= {by the init pop Rule}
stack(init.push(a).pop.push(a).pop.push(a).size)
= {by the push pop Rule, applied twice}
stack(init.push(a).size)
= {by the size Rule}
1+stack(init.size)
= {by the size axiom}
1. QED
For V4, we find the following:
stack(pop.push(a).pop.init.pop.push(a).top.push(a).top.push(a).pop.empty)
= {by the init rule}
stack(init.pop.push(a).top.push(a).top.push(a).pop.empty)
= {by the init pop rule}
stack(init.push(a).top.push(a).top.push(a).pop.empty)
= {by the VX rule, as it pertains to top}
stack(init.push(a).push(a).push(a).pop.empty)
= {by the push pop rule}
stack(init.push(a).push(a).empty)
⇒ {by the empty rule}
stack(init.push(a).empty)
= {by the empty axiom}
false.
If the left-hand side logically implies false, then it is false. QED
For V5, we find the following:
stack(init.pop.pop.pop.push(a).push(b).push(c).top.push(c).push(b).empty.top)
= {by virtue of the VX rules, applied to empty}
stack(init.pop.pop.pop.push(a).push(b).push(c).top.push(c).push(b).top)
= {by virtue of the second top axiom}
b. QED
Because our specification has survived five tests unscathed, we gain a bit more confidence in its validity.
The main ideas/concepts that you need to keep from this chapter are the following:
Compute the following:
Compute the following expressions:
From these three relations, compose the following relations: female (the complement of male), mother, father, daughter, son, half-sibling, sibling, half brother, half sister, brother, sister, maternal grandfather, paternal grandfather, maternal grandmother, paternal grandmother, grandson, granddaughter, uncle, aunt, niece, nephew, and cousin. All of these relations can be built from P and M.
Do these specifications satisfy the compatibility condition? If so, compute their join. If not, explain why.
Do specifications F5 and F9 satisfy the compatibility condition? If so, compute their join. If not, explain why.
Do specifications F5 and F10 satisfy the compatibility condition? If so, compute their join. If not, explain why.
4.1. This ADT stores and retrieves elements in a linearly ordered structure. We let the list be defined by the following operations:
AX-operations: These are operations that alter the state of the ADT but produce no visible output.
VX-operations: These are operations that return values but do not change the state.
Write this specification and validate it. If this work is done in a team, divide the team into two groups, one for specification and one for specification validation. Hint: Convert all insert operations into insertlast operations, and all delete operations into deletelast operations; then specify insertlast and deletelast.
4.2. In discrete mathematics, a multiset is a collection of objects where duplication is permitted. We let multiset be defined by the following operations:
Write this specification and validate it. If this work is done in a team, divide the team into two groups, one for specification and one for specification validation.
Software specification has been the subject of active research since the early days of software engineering, highlighting both the criticality and the difficulty of this phase and its products; it is impossible to do justice to all the work that was published in this area or to any significant portion thereof. We will merely cite a few of the specification languages that have emerged: Z, a relational notation that has been widely used in industry and academia (Spivey, 1998); B, a relational notation that has an object-oriented flavor, and supports refinement, in addition to specification (Abrial, 1996); Alloy, a language inspired by Z and B and used to represent structures by means of sets of constraints (Jackson, 2011). For a general overview of specification languages and issues, consult (Habrias and Frappier, 2013).