
A Brief Introduction to Measurement Theory and to the Essays

C. Wade Savage

University of Minnesota

Philip Ehrlich

Brown University

 

Until approximately three decades ago, measurement theory was widely assumed to be the avocation of a few physicists, mathematicians, and philosophers of science, and the obsession of a few social scientists who hoped to secure for their fields the authority enjoyed by mathematical physics and chemistry. The recognition of a legitimate, specialized field of inquiry called “measurement theory” is of even more recent origin.

Admittedly theories concerning the nature of quantity date from at least the ancient Greeks, as described in Aristotle’s writings on these subjects in the Categories and the Metaphysics (c. 330 B.C.) and later axiomatized in Euclid’s Elements (c. 300 B.C.). Furthermore, portions of Euclid’s treatise can be regarded as a theory of measurement of spatial extent, in the sense that one line segment, surface, or solid “measures” another by being compared with it. However, this is what we now call synthetic geometry. It does not assign numbers in the abstract, arithmetic sense to nonnumerical continua such as lengths, areas, and regions; therefore, it cannot compare magnitudes within such continua by comparing such numbers. Euclidean geometry compares lengths, areas, and regions by comparing physical, nonnumerical ratios of these magnitudes and in effect uses such ratios in the place of our arithmetic numbers.

The analytic geometry Descartes and Fermat pioneered assumes that numerical measures of length, area, and volume can be assigned to line segments, surfaces, and solids by counting congruent unit objects in a collection that successively approximates the object being measured. It is a theory of measurement in the contemporary sense in which measurement is the assignment of abstract arithmetic numbers to objects. However, it is not a full, explicit theory; it makes the assumption of measurability naively, without attempting to justify it in the manner currently required.

Today, measurement in general is taken to be the assignment of numbers (numerals, say the nominalists) to entities and events to represent their properties and relations. Furthermore, measurement theory is supposed to analyze the concept of a scale of measurement or numerical representation, distinguish various types of scale and describe their uses, and formulate the conditions required for the existence of scales of various types, not just for the case of length and other extensive properties but for measurable properties of all types.

The previous characterization of contemporary measurement theory is heavily influenced by the formalist, representationalist approach to the subject presented in Krantz, Luce, Suppes, and Tversky (1971), hereafter referred to as KLST. As the following essays illustrate, this approach currently dominates the field, serving as a model for most measurement theorists and a target for the rest. To provide an organizing context for the essays and to orient nonspecialist readers, we offer a brief historical survey of contemporary measurement theory.

Contemporary measurement theory can be said to begin with Helmholtz’s Counting and measuring (1887) and Hölder’s Die Axiome der Quantität und die Lehre vom Mass (1901), in which axioms for such (extensive) properties as length and mass were formulated. These works, together with influential treatments by Bertrand Russell (1903) and N. R. Campbell (1928, 1957/1920), created what may be called the conservative conception of measurement. On this conception counting is defined as placing the members of a collection in one-to-one correspondence with a segment of the natural numbers, and direct (extensive) measurement of a property of an object is then defined as counting concatenations of standard objects that approximately equal the object with respect to the property, where the concatenation has, like addition on numbers, such properties as commutativity, associativity, and so on. What most philosophers knew of formal measurement at the close of the first half of the century was gleaned largely from Campbell (1957/1920) and Cohen and Nagel (1934). Psychologists had, by that time, been exposed to a similar treatment by Bergmann and Spence (1953/1944).

It appears that, on the conservative definition, length, weight, duration, angle, electric charge, and several other physical properties are directly measurable. However, hardness and temperature seem not to be directly measurable. In addition, such allegedly psychological attributes as hue, pitch, taste, and pain intensity seem even more clearly not to be directly measurable. These latter properties do not seem to possess a natural, empirical operation of concatenation that can be used to define their measure in the manner required by the conservative conception. Of course such properties may still be indirectly measurable by measuring a directly measurable, correlated property. Thus, temperature is measured by measuring the length of a column of thermometric fluid in a thermometer. (Henceforth, “measurable” will mean “directly measurable.”) Some of the conservative theorists (Nagel, 1931, for example) distinguished extensive and intensive attributes, and they tentatively conceded that some of these other properties might be intensively measurable. Extensive measurement is accomplished by counting concatenations and is supposed to make such statements as “The length of a is n times greater than the length of b” meaningful; intensive measurement does not proceed by counting concatenations and is supposed to make only such statements as “The temperature of a is greater than the temperature of b” meaningful. However, many conservative theorists did not recognize intensive measurement and claimed that properties without an empirical concatenation are not properly said to be measurable.

Even as the conservative view was being formulated, many psychologists, especially psychophysicists such as Thurstone (1959), were insisting that various psychological properties are measurable; indeed some believed that every property is measurable. In the words of Guilford’s (1954/1936) treatise:

Many psychologists adopt the dictum of Thorndike that “Whatever exists at all exists in some amount” and they also adopt the corollary that whatever exists in some amount can be measured: the quality of handwriting, the appreciation of a sunset, the attitude of an individual toward Communism, or the strength of desire in a hungry rat. (p. 3)

Accordingly, at mid-century, the psychophysicist S. S. Stevens (1946, 1951) formulated what may be called the liberal conception of measurement, according to which measurement is defined simply as the assignment of numbers to things and properties according to rule. (For the distinction between the two conceptions of measurement, see Savage, 1970, chapters 4 and 5, where they are called the “narrow” and “broad” views.) Depending on the rule employed, the numerical assignment will constitute a scale of some type. Among conservatives, philosophical questions about measurement often took the form “Is E measurable?” where E is some property such as hardness, loudness, hue, pain, afterimage area, perceived length, desire, subjective time, welfare, probability, prestige of occupation, value, beauty, and so forth. The liberal theorists argued that this question is fruitless, because, on their view, everything is capable of measurement of some sort, and substituted for it the question “Of what sort of measurement is E capable?” They concluded that the task of measurement theory is to classify and describe the types of measurement.

Stevens (1946, 1951) distinguished four main types of scales of measurement. A nominal scale represents only differences among objects (for example, numbers assigned to football players). An ordinal scale represents the order of objects with respect to some property (for example, numbers used to rank restaurants). An interval scale represents intervals of a property (for example, the Centigrade scale of temperature). A ratio scale represents ratios of a property (for example, the inch scale of length). Stevens suggested that scales may also be classified by means of the transformations that leave the scale-form invariant, the “admissible” transformations. If ϕ is the original scale, and ϕ’ is the transformed scale, then the defining transformation for an interval scale is ϕ’ = kϕ + c (for example, ϕ’ is the Fahrenheit scale of temperature; ϕ is the Centigrade scale; k = 1.8; and c = 32); the defining transformation for a ratio scale is ϕ’ = kϕ (for example, ϕ’ is the centimeter scale of length; ϕ is the inch scale; and k = 2.54). On the liberal conception, measurement does not require counting, and it does not require an operation of concatenation on the measured objects. This claim is intuitively obvious for nominal and ordinal scales with a small, finite number of values; for here, numbers can be assigned to the objects one by one, checking each assignment to assure that it represents identity and difference and order. However, Stevens also claimed to have constructed interval and ratio scales of continuous perceptual properties such as perceived loudness, perceived electric shock, and perceived heaviness from the numerical responses of subjects in psychological experiments, without employing any operation of concatenating physical objects, or sensations, or responses.
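The tie between scale type and admissible transformation lends itself to a small computational check. The following Python sketch is our illustration, not Stevens’; it uses the Fahrenheit/Centigrade and centimeter/inch examples above to confirm that an affine transformation preserves ratios of intervals but not ratios of values, whereas a similarity transformation preserves both.

```python
# Our illustration (not Stevens'): ratios of intervals are invariant under
# the admissible transformations of an interval scale, and ratios of values
# under those of a ratio scale.

def to_fahrenheit(c):           # interval-scale transformation: phi' = 1.8*phi + 32
    return 1.8 * c + 32

def to_centimeters(inches):     # ratio-scale transformation: phi' = 2.54*phi
    return 2.54 * inches

a, b, c, d = 10.0, 20.0, 30.0, 50.0

# The interval ratio (a - b)/(c - d) survives the affine map...
assert abs((a - b) / (c - d)
           - (to_fahrenheit(a) - to_fahrenheit(b))
           / (to_fahrenheit(c) - to_fahrenheit(d))) < 1e-9

# ...but the value ratio a/b does not: 20 degrees is not "twice" 10 degrees.
assert abs(a / b - to_fahrenheit(a) / to_fahrenheit(b)) > 0.1

# Under the similarity map, value ratios survive, so ratio talk is meaningful.
assert abs(a / b - to_centimeters(a) / to_centimeters(b)) < 1e-9
print("interval scales preserve interval ratios; ratio scales preserve ratios")
```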

In the two decades following Stevens’ formulation of his conception of measurement, several logical empiricist philosophers of science provided semiformal treatments of the subject—Hempel (1952), Carnap (1966), and Ellis in his Basic concepts of measurement (1966). In the operationalist-instrumentalist tradition of Mach, Bridgman, and Stevens, Ellis argued that the function of measurement is not to represent independently existing nonnumerical quantities; indeed quantities (except when identified with an ordering relation) are in effect created by the operations that measure them. Consequently one cannot choose between two additive scales of the same quantity that use different operations of concatenation or even between an additive and a nonadditive scale, on the ground that one better represents length than the other. The only rational ground is that one scale leads to simpler numerical laws of length, area, mass, force, and so forth than the other. In so arguing, Ellis adopted a conventionalist view of measurement to some extent. To the extent that he claimed quantities do not exist independently of their measuring operations, his position is antirealist (operationalist). Carnap’s writings on measurement express a more strongly conventionalist, operationalist view than those of Ellis. Hempel’s work is comparatively neutral on the metaphysical issues involved.

Most of the major contributors to formal measurement theory have been mathematicians and psychologists. For most of the century, psychologists had been concerned with the theory of measuring psychological magnitudes, such as subjective brightness and hue, pain and pleasure, and attitudes and preferences. Most of this work was published in psychological journals and labeled “psychometrics” or “psychological scaling,” as if to imply that the issues under discussion concerned only psychological magnitudes. Research of this sort was summarized during the period following Stevens’ formulation in Torgerson’s Theory and methods of scaling (1958). Meanwhile a group of mathematical logicians and mathematical psychologists, building on the results of Cantor, Hölder, Birkhoff, and other mathematicians, began to employ model-theoretic and set-theoretic methods to investigate the conditions and properties of scales of measurement in general, scales from areas as diverse as economics, psychophysics, and physics. This work culminated in 1971 in the landmark Foundations of measurement, Volume I by Krantz et al., a book that effectively defines the field known today as formal measurement theory. Volumes II (1989) and III (1990) have recently appeared and will undoubtedly have a comparably profound impact. The subject thus defined is usefully surveyed in Roberts’ Measurement theory (1979), with special emphasis on applications to decision theory and the social sciences.

Simply described, the approach of KLST is as follows. A nonnumerical (empirical) relational structure is a set of nonnumerical entities E, together with relations (including operations and distinguished elements) S1, S2, …, Sn on the set E. A numerical (abstract) relational structure is a set of numerical entities N, together with relations R1, R2, …, Rm on the set N. A scale of measurement ϕ is a relation-preserving function (i.e., a homomorphism) from a nonnumerical relational structure to a numerical relational structure. The type of the relational structure is defined by the number and degree of the relations in the structure, and the scale type can be defined as the type of the relational structure mapped. Scale type can also be defined in terms of the scale transformations that preserve the mapping. A nominal scale maps an empirical relational structure into 〈N, =〉, an ordinal scale into 〈N, >〉, an interval scale into 〈N, >, –〉, and an additive scale into 〈N, >, +〉 (or into similar structures with > replaced by ≥).
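To make the definitions concrete, here is a minimal sketch, our construction rather than KLST’s, of a scale as a homomorphism from a three-element empirical structure, with hypothetical comparison and concatenation data, into 〈N, ≥, +〉.

```python
# A minimal sketch (our construction, with hypothetical data): a scale phi
# as a homomorphism from a three-rod empirical structure to <N, >=, +>.

E = ["a", "b", "c"]                                   # three rods

# Hypothetical observed comparisons ("at least as long as") and one
# observed concatenation (a joined with b matches c).
at_least = {("a", "a"), ("b", "b"), ("c", "c"),
            ("a", "b"), ("c", "a"), ("c", "b")}
concat = {("a", "b"): "c"}

phi = {"a": 2.0, "b": 1.0, "c": 3.0}                  # candidate scale

# Homomorphism conditions: R(x, y) iff phi(x) >= phi(y), and
# phi(x o y) = phi(x) + phi(y) wherever o was observed.
order_ok = all(((x, y) in at_least) == (phi[x] >= phi[y])
               for x in E for y in E)
add_ok = all(phi[z] == phi[x] + phi[y] for (x, y), z in concat.items())

print("phi is a scale:", order_ok and add_ok)         # True
```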

The first task of formal measurement theory is to discover the precise conditions required for the existence of scales of various types. More precisely, the task is to discover and prove representation theorems asserting the existence of scales of certain types if and (when possible) only if the nonnumerical relational structure in question satisfies certain sets of conditions. For example, let 〈E, ≽, ⊕〉 be a nonnumerical structure with ≽ a binary relation on E and ⊕ a closed binary operation on E; furthermore, suppose a ≈ b is defined as a ≽ b and b ≽ a, and a ≻ b is defined as a ≽ b and not (b ≽ a). Given these assumptions, there exists a positive real-valued function ϕ: E → R⁺ such that, for all a, b ∈ E,

(i)   a ≽ b if and only if ϕ(a) ≥ ϕ(b), and

(ii)   ϕ(a ⊕ b) = ϕ(a) + ϕ(b),

if and only if 〈E, ≽, ⊕〉 satisfies the following conditions for all a, b, c, d ∈ E:

1.   (Weak Order) ≽ is a reflexive, transitive, and connected relation on E.

2.   (Weak Associativity) a ⊕ (b ⊕ c) ≈ (a ⊕ b) ⊕ c.

3.   (Monotonicity) a ≽ b iff a ⊕ c ≽ b ⊕ c iff c ⊕ a ≽ c ⊕ b.

4.   (Positivity) a ⊕ b ≻ a.

5.   (Strongly Archimedean) If a ≻ b, then for any c, d ∈ E there is a positive integer n such that n(a) ⊕ c ≽ n(b) ⊕ d, where n(a) is defined inductively as 1(a) ≈ a and (n + 1)(a) ≈ n(a) ⊕ a.

Nonnumerical relational structures of this kind are known as closed (Archimedean) positive extensive structures, a paradigm example of which is a set of straight, rigid rods where ⊕ is the operation of joining rods end-to-end along a straight line and a ≽ b means that, when rods a and b are placed side by side with one pair of endpoints coinciding, either the opposite pair of endpoints coincide, or the opposite endpoint of a extends beyond that of b.
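As a concrete check, the axioms can be tested by machine on the paradigm model. The sketch below is our illustration: rods are represented by positive real lengths, ⊕ by addition, and ≽ by ≥; axioms 1–4 are spot-checked on random triples, the Archimedean axiom on a single pair, and the identity map then serves as the ϕ of (i) and (ii).

```python
# Spot-check (our illustration) of axioms 1-5 on the paradigm model:
# E = positive reals (rod lengths), concatenation = addition, order = >=;
# the identity map phi then satisfies (i) and (ii) trivially.
import random

def geq(x, y): return x >= y      # the comparison relation
def oplus(x, y): return x + y     # concatenation of rods

random.seed(0)
sample = [random.uniform(0.1, 10.0) for _ in range(30)]

for a in sample:
    for b in sample:
        for c in sample:
            assert geq(a, b) or geq(b, a)                      # 1. connectedness
            assert abs(oplus(a, oplus(b, c))
                       - oplus(oplus(a, b), c)) < 1e-9         # 2. weak associativity
            assert geq(a, b) == geq(oplus(a, c), oplus(b, c))  # 3. monotonicity
            assert oplus(a, b) > a                             # 4. positivity

# 5. Archimedean: enough copies of the shorter rod exceed the longer one.
a, b, n = 9.5, 0.3, 1
while not geq(n * b, a):
    n += 1
print("axioms hold on the sample; Archimedean witness n =", n)
```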

Although examples like this have played an important role in the literature on formal measurement theory, they are unrealistic from the standpoint of practical measurement, because the empirical conditions described by the axioms are rarely satisfied. For example, the condition that operation ⊕ is closed entails that E is infinite, which in practice rarely, if ever, obtains. Consequently, in recent years, greater emphasis has been placed on conditions for “constructible” scales, scales constructed by operations that under favorable conditions can actually be performed.

The second part of the task of formal measurement theory is to discover and precisely characterize the classes of admissible transformations for scales of various types—transformations that produce a new scale ϕ’ that maps the same relations as the original ϕ. One says that a scale of a given type is unique up to a certain subclass of the class of all possible transformations. Thus, additive scales (e.g., those on the Archimedean extensive structures described previously) are unique up to multiplication by a constant positive real number. Interval scales are unique up to positive linear transformations, ordinal scales unique up to monotone (order-preserving) transformations, and nominal scales unique up to one-to-one transformations. Accordingly theorems that assert the uniqueness of a scale of a certain type are called uniqueness theorems, and the second task of formal measurement theory is to discover and prove such theorems. (Stevens’ system of scale classification is controversial and is criticized in Ellis, 1966, chapter 4; Savage, 1970, pp. 166–172; and Krantz et al., 1971, p. 11.)

As described above, the task of formal, representationalist measurement theory is to prove the existence and uniqueness of scales of measurement, where such scales are taken to be homomorphic functions from sets of empirical objects to sets of numbers. This description is undoubtedly overly simple and narrow. It assumes, for example, that scales are functions into number systems and a fortiori that representing structures are necessarily numerical relational structures. However, as the practitioners of formal measurement theory are well aware, the outcomes of measurement may be vectors or other nonnumerical entities. Consequently what we have described above is merely a segment, albeit an important segment, of their program.

In what follows, we briefly describe and comment on the essays. The formalist, representationalist approach dominates measurement theory; therefore, it seems appropriate to begin the volume with essays by some of the leading architects of this approach, following these with essays that attempt to broaden the approach, and these in turn with essays that criticize the approach. Essays with more purely metaphysical or philosophical themes, even if technically articulated, appear at the end.

The Archimedean condition of extensive measurement, which mirrors the familiar Archimedean property of the real number system, is often stated as follows:

5’.   For all a, b ∈ E, if a ≻ b, then there is a positive integer n such that n(b) ≽ a,

where n(a) is defined inductively as 1(a) ≈ a and (n + 1)(a) ≈ n(a) ⊕ a. Historically axiom 5’. was employed to rule out the existence of magnitudes (and differences in magnitudes) that are infinitely small compared with others. It is formulated in terms of a concatenation operation; thus, the axiom clearly fails to apply to structures that do not possess such an operation. Note that axioms 1.–4. and 5’. are not sufficient to guarantee the representation guaranteed by axioms 1.–5.
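A toy example, ours rather than one from the literature discussed here, shows what axiom 5’. excludes: in a lexicographically ordered structure of pairs, the element b = (0, 1) is infinitely small compared with a = (1, 0), and no number of copies of b ever reaches a.

```python
# A toy non-Archimedean structure (our example): pairs ordered
# lexicographically, with componentwise addition as concatenation.
# b = (0, 1) is infinitely small relative to a = (1, 0), so axiom 5' fails.

def geq(x, y):                       # lexicographic order on pairs
    return x[0] > y[0] or (x[0] == y[0] and x[1] >= y[1])

def n_copies(n, x):                  # n(x) = x (+) ... (+) x, n times
    return (n * x[0], n * x[1])

a, b = (1, 0), (0, 1)
assert geq(a, b) and not geq(b, a)   # a strictly exceeds b

# No finite number of copies of b ever reaches a (checked up to 10,000).
print(all(not geq(n_copies(n, b), a) for n in range(1, 10001)))   # True
```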

Luce and Narens (chapter 2 in this volume) seek a generalization of axiom (5’) that will apply to structures that are in some sense Archimedean and yet possess no concatenation operation. They assume that one purpose of an Archimedean axiom is to insure that the structures satisfying it are imbeddable into a classical continuum (i.e., an open Dedekind-complete ordered set with a denumerable order-dense subset). Consequently they attempt to isolate a class of continuously ordered structures that may be reasonably regarded as intrinsically Archimedean and to characterize Archimedean structures in terms of their ability to be imbedded in intrinsically Archimedean structures in appropriate ways. They are able to report only partial results at the present time, owing to the difficulty in achieving a satisfactory characterization of an intrinsically Archimedean structure. They note that any acceptable generalization of the concept of Archimedeanness must involve higher-order logical concepts, which, according to some theorists, entails that the concept is nonempirical.

A broad conception of representationalist measurement theory seems required to accommodate the essay by Suppes and Zanotti (chapter 3 in this volume). Their subject is the probability (and error) of measurements, for example, the probability of obtaining a given numerical value in measuring the length of a rod with a meterstick. They define a probability in the usual measure-theoretic manner as a function to the closed interval of real numbers [0,1] from a set of subsets (of a set V of basic events or propositions) closed under union, intersection, and complementation, a function that satisfies the classical Kolmogorov axioms. Such a set of subsets is called a Borel field, or Boolean algebra. The Kolmogorov axioms are, where Pr(e) is the probability of an event (simple or complex): for all e in V, (a) Pr(e) ≥ 0; (b) if e is certain (e = V), then Pr(e) = 1; (c) if e1 and e2 are incompatible, then Pr(e1) + Pr(e2) = Pr(e1 ∪ e2). In the usual treatment, the probability of a measurement r for object a is approximated by the number of times r is obtained divided by the total number of measurements of a, and then the mean value and the variance of the measurements are calculated from the distribution of the probabilities of various measurements. The average of the differences between measurements and the mean value is the first central moment, which is always zero. The average of the squares of these differences is the second central moment, or variance, and the square root of the variance is the standard deviation. Instead of proceeding in this familiar manner, the authors define the “qualitative” nth moment of a without recourse to a (quantitative) probability function and then prove a representation theorem to the effect that their qualitative moments correspond to the quantitative moments of the usual treatment.
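For contrast with the authors’ qualitative construction, the familiar quantitative treatment they depart from can be sketched in a few lines (our illustration, with made-up readings):

```python
# The usual quantitative treatment (our sketch, with made-up readings);
# Suppes and Zanotti's contribution is a qualitative counterpart of
# these moments, proved equivalent by a representation theorem.
from collections import Counter
import math

readings = [9.8, 10.0, 10.1, 10.0, 9.9, 10.2, 10.0, 9.9]   # measurements of a

n = len(readings)
prob = {r: k / n for r, k in Counter(readings).items()}    # Pr(r) ~ frequency

mean = sum(r * p for r, p in prob.items())                 # first raw moment
first_central = sum((r - mean) * p for r, p in prob.items())    # always ~0
variance = sum((r - mean) ** 2 * p for r, p in prob.items())    # second central
std_dev = math.sqrt(variance)

print(round(mean, 4), round(first_central, 12),
      round(variance, 4), round(std_dev, 4))
```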

The formalist-representationalist approach to measurement has long been criticized on the ground that its narrow focus has neglected important topics in the field (see Adams, 1966, for example). A complaint of this type can be associated with the essay by Adams (chapter 4 in this volume), because his topic is usually neglected by representationalists. It concerns the empirical status of axioms that characterize empirical relational structures and constitute necessary and sufficient conditions for the existence and uniqueness of scales of various sorts. These axiomatized relational structures are called “theories of measurement” by Adams and the representationalists, in conformity with the model-theoretic conception of a scientific theory that has emerged from the approach (see Suppes, 1967). We eschew this usage in favor of “axiomatized structure” or simply “structure” to prevent confusion between a “theory of measurement” in the sense of an axiomatized structure and “measurement theory”. Adams’ example is the measurement of probability. He distinguishes various classes of axiomatized structures (basic de Finetti structures, n-partitionable structures, Koopman Archimedean structures, etc.), and he describes the inclusion relations between them. He classifies the types of data that can be consistent or inconsistent with these structures and provides the following definitions: (a) T is empirically as strong as T’ with respect to data of type D, if and only if every set of data of type D that is consistent with T is also consistent with T’; and (b) an axiom contributes empirical content to T, if adding it to T yields a T’ that is empirically stronger than T. His main theorem describes the relations of empirical strength and empirical equivalence among the various axiomatized structures and shows that axioms may contribute empirical content to a structure with respect to some data and not with respect to others. However, according to Adams, the nonnecessary axioms of equipartitionability and Archimedeanness do not contribute empirical content to certain probabilistically representable structures with respect to data of any sort. These are nonempirical without qualification and, as some have speculated, are employed simply to prove the desired representation theorem on such structures. (Presumably these will not be interpreted realistically, for which see below.) Other nonnecessary axioms, for example, axiom P6’ of L. J. Savage’s system, do add empirical content with respect to data of a special kind.
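On finite domains, Adams’ definitions can be tested by brute force. The sketch below is our construction, not his (his structures concern qualitative probability; ours uses a bare comparative relation): it shows that a transitivity axiom contributes empirical content with respect to data on three objects but none with respect to data on two.

```python
# Brute-force sketch (ours, not Adams') of his definitions: an axiom
# contributes empirical content with respect to data of type D iff some
# dataset of type D is consistent with the axioms of T but not with T
# plus the new axiom.
from itertools import product

def all_datasets(domain):
    """Every complete table of a binary comparative relation on domain."""
    pairs = [(x, y) for x in domain for y in domain]
    for bits in product([False, True], repeat=len(pairs)):
        yield dict(zip(pairs, bits))

def connected(j, domain):
    return all(j[(x, y)] or j[(y, x)] for x in domain for y in domain)

def transitive(j, domain):
    return all(j[(x, z)]
               for x in domain for y in domain for z in domain
               if j[(x, y)] and j[(y, z)])

def contributes_content(axiom, base, domain):
    return any(all(a(j, domain) for a in base) and not axiom(j, domain)
               for j in all_datasets(domain))

# Transitivity excludes cyclic data on three objects: empirical content...
print(contributes_content(transitive, [connected], ["a", "b", "c"]))   # True
# ...but excludes nothing about two objects: no content for such data.
print(contributes_content(transitive, [connected], ["a", "b"]))        # False
```

The brute-force search here merely stands in for Adams’ general proofs, which cover infinite families of structures and data types.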

Kyburg (chapter 5 in this volume) advances another objection of the “narrowness” type: He complains that representationalists have neglected the problem of errors of measurement, and he proposes a theory of such errors to remedy the neglect. If measurements of three rods, a, b, and c, indicate that a is longer than b, b longer than c, and c longer than a, then one cannot accept all three measurements without contradicting the axiom of transitivity of the longer-than relation. Kyburg proposes that here, as elsewhere in theory testing, the experimenter should follow two principles: (a) reject as erroneous the smallest number of data that are inconsistent with the axioms of the theory, and (b) distribute the errors evenly across different kinds of measurement. He suggests that the first principle corresponds to the familiar rule in statistics of minimizing the standard deviation (root mean square difference) in a collection of numerical data. Numerous examples are provided to illustrate the application of the two principles. He notes that, in applying the principles, the theory being tested is used to determine what data may be used to test it. However, he finds no vicious circularity in the process.
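Kyburg’s first principle lends itself to a mechanical illustration (ours, not his): given the cyclic rod data above, search for the smallest set of observations whose rejection restores consistency with transitivity.

```python
# A sketch (ours) of Kyburg's first principle: reject as erroneous the
# smallest number of observations inconsistent with the theory's axioms.
from itertools import combinations

observations = [("a", "b"), ("b", "c"), ("c", "a")]   # a > b, b > c, c > a

def acyclic(obs):
    """True if the 'longer than' data admit a strict (transitive) order."""
    longer, changed = set(obs), True
    while changed:                                    # transitive closure
        changed = False
        for (x, y) in list(longer):
            for (u, v) in list(longer):
                if y == u and (x, v) not in longer:
                    longer.add((x, v))
                    changed = True
    return all((y, x) not in longer for (x, y) in longer)   # no cycles

def minimal_rejection(obs):
    for k in range(len(obs) + 1):                     # smallest sets first
        for rejected in combinations(obs, k):
            if acyclic([o for o in obs if o not in rejected]):
                return rejected

print(minimal_rejection(observations))   # rejecting any one datum suffices
```

In this example any single rejection restores consistency; Kyburg’s second principle would then constrain which datum to reject when the kinds of measurement differ.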

A more sweeping narrowness objection is that the representationalist approach studies measurement in isolation from the various empirical theories—physical, biological, psychological, and so forth—in which it functions as a component and consequently misunderstands the process. This objection has been urged by the structuralist measurement theorists—for example, Sneed and his collaborators (Sneed, 1971; Balzer, Moulines, and Sneed, 1987)—and is here presented in the essay by Balzer (chapter 6 in this volume). After comparing the two approaches, he identifies several shortcomings in representationalism. First, it fails to realize that conditions for scales of measurement normally include axioms (laws) of the theory employing the scale and may or may not include axioms of the measured magnitude. For example, the mass of bodies is measured in collisions through the use of the law of mechanics that relates the masses of colliding bodies to their initial and final velocities. Second, representationalism fails to understand that the distinction between fundamental and derived measurement is not absolute but is relative to the theory in which the method is imbedded and that the one type of measurement is not intrinsically better than the other. Which is better depends on the stage the theory has reached in its construction. He contends that one achievement of the structuralist program is a correct and formally scrupulous substitute for the unacceptable distinction between theoretical (indirectly measurable) and observational (directly measurable) terms.
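Balzer’s collision example can be made concrete (our sketch, with simulated data): the mass ratio of two bodies is recovered from their velocity changes by conservation of momentum, a law of the very mechanics in which mass figures.

```python
# Sketch (ours, with simulated data) of the theory-dependent measurement
# Balzer describes: the mass ratio of two colliding bodies is computed
# from velocity changes via conservation of momentum,
#   m1*(v1' - v1) = -m2*(v2' - v2).

def mass_ratio(v1, v1_after, v2, v2_after):
    """m1/m2 from the velocity changes in an isolated two-body collision."""
    return -(v2_after - v2) / (v1_after - v1)

# Simulated elastic collision with true masses m1 = 2, m2 = 1.
m1, m2, v1, v2 = 2.0, 1.0, 1.0, -1.0
v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

print(mass_ratio(v1, v1_after, v2, v2_after))   # 2.0, recovering m1/m2
```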

If, as the representationalist view assumes, measurement is the representation of empirical relational structures by means of numerical relational structures, then obviously there must be a strong similarity between the represented structure and the representing structure, if measurement is to be possible. Indeed, according to the representationalist, a scale of measurement is defined as a homomorphism from an empirical structure to a numerical structure. It is, therefore, natural to wonder whether the functions served by the numerical representation could not also be served by the nonnumerical counterpart. Simply stated, the function of numerical representations is to make possible the construction of numerical scientific theories that enable us to understand, predict, and control our nonnumerical environments. So the question is whether science without numerical representation is possible. Hartry Field, in his Science without numbers (1980), attempted to prove that it is possible where the science is classical mechanics, as (on his interpretation) Hilbert proved it for the science of Euclidean geometry. That is to say, he attempted to prove that synthetic mechanics is possible, as Hilbert proved that synthetic Euclidean geometry is possible.

Burgess (chapter 7 in this volume) argues that, in one sense, Field’s thesis is clearly true and that the interesting questions concern its significance. He describes, in full technical detail, a method for replacing analytic (numerical) classical physics with a synthetic (nonnumerical) theory that is supposed to capture the synthetic content of the analytic theory. The synthetic theory is constructed in two stages. First, numbers are coded as triples of collinear points (whose distance ratios uniquely determine the corresponding numbers), and relations on numbers are replaced by relations on point triples. Then a more natural synthetic theory is described by defining relations on point triples in terms of the geometrical relations of betweenness, congruence, equality of ratios of quantity differences, and quantity inequality. Burgess claims that every formula of the analytic theory has a counterpart in the synthetic theory such that the one is provable in its theory if and only if the other is provable in its theory. He concludes with a philosophical comparison of Field’s metaphysics and that of W. V. O. Quine. If antinominalism (Platonism) is the view that numbers, functions, and sets exist (or are justifiably held to exist), and nominalism is the view that they do not exist (or are not), then Field is a nominalist and Quine an antinominalist. If realism is the view that a theory is justifiably believed only on the ground that it reflects the world, then Field is a realist and Quine an antirealist. Burgess suggests that Quine’s antinominalism flows from his antirealism (which says that belief in numbers is justified by its utility, not by its correspondence to the world) and that Field’s nominalism flows from his realism (which says that belief in numbers is unjustified, because it does not causally reflect the world and cannot, because numbers are noncausal). Furthermore, he finds their disagreement too deep to be removed by a technical device that enables us to dispense with numbers in principle, if not in practice.
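The first stage of the coding described above can be conveyed with a toy sketch (ours; it models the line by the real numbers purely for the demonstration, and so presupposes the very numbers the synthetic theory is meant to avoid; only the coding scheme is illustrated).

```python
# Toy version (ours) of the first stage: a real number r is coded by three
# collinear points whose distance ratio is r, and is recoverable from them.

def encode(r, x=0.0, y=1.0):
    """Return collinear points (x, y, z) with (z - x)/(y - x) = r."""
    return (x, y, x + r * (y - x))

def decode(points):
    x, y, z = points
    return (z - x) / (y - x)

for r in [0.5, 2.0, 3.14159, -7.0]:
    assert abs(decode(encode(r)) - r) < 1e-12
print("numbers recovered from point triples")
```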

(Note the ambiguity of “realism.” Realism in philosophy of science is usually taken to be the view that the theoretical terms of scientific theories (e.g., “force,” “electron,” “π”) denote real entities and not convenient fictions, and operationalism or instrumentalism is its opposite. Realism in philosophy of mathematics is usually taken to be the view that numbers are real and not fictions or linguistic constructions, and nominalism is its opposite. Realism in measurement theory is usually taken to be the view that numerical relations such as order, difference, addition, multiplication, differentiation, and so forth represent real, nonnumerical empirical relations under appropriate scales of measurement, and conventionalism seems the best choice for its opposite. Realism in epistemology is the view that a theory is justifiably believed only on the ground that it reflects the world (and could be construed as including any or all of the realist positions above); coherentism, subjectivism, and skepticism are its contraries.)

Like the synthetic description of the geometric linear continuum found in Euclid, Field’s description of the nonnumerical structures with which he would replace numerical structures of geometry and physics revives an ancient question of measurement theory: What is the difference between quality and quantity? between qualitative and quantitative relations? Koslow (chapter 8 in this volume) attempts to answer this question by relativizing the notions of a quantitative or qualitative relation to a partition (set of disjoint subsets) of the domain of the relation. Roughly, a relation R(x1, …, xn) is defined as quantitative relative to partition M only if (a) replacement of a member xi of the n-tuple (x1, …, xn) by a different member of its partition element produces an n-tuple in the relation, and (b) if two n-tuples belonging to R differ in at most the ith component, then those two components belong to the same member of the partition. This definition enables him to show that qualitative (classificatory and comparative) relations and quantitative relations fall into distinct classes and also that many qualitative relations are numerical, and many quantitative relations are nonnumerical. Several criticisms and clarifications ensue. Measurement is typically performed on quantitative empirical structures; therefore, representational theorists of measurement (KLST) are in error when they assert that numerical laws are restatements of qualitative relations. Furthermore, Field’s nominalist program should be interpreted as an attempt to show that science without numerical relations, not science without quantitative relations, is possible. The second half of the essay is devoted to a historical study of three quantitative, nonnumerical relations: sameness of ratios as it figures in Euclid’s Elements, sameness of temperature as it figures in Maxwell’s theory of heat, and sameness of quantity of matter (mass) as it figures in Newton’s mechanics.
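For finite domains, Koslow’s two conditions can be written out directly. The sketch below is our reconstruction of them, under a hypothetical four-object domain: a “same magnitude” relation relative to a two-cell partition satisfies both conditions, while a relation mixing cells fails condition (b).

```python
# A finite-domain reconstruction (ours) of Koslow's two conditions for a
# relation R being quantitative relative to a partition M of its domain.

def cell_of(x, partition):
    return next(cell for cell in partition if x in cell)

def condition_a(R, partition):
    """Swapping any component for a partition-mate stays inside R."""
    return all(tuple(y if j == i else t[j] for j in range(len(t))) in R
               for t in R for i in range(len(t))
               for y in cell_of(t[i], partition))

def condition_b(R, partition):
    """Tuples of R differing only at i have i-th components in one cell."""
    return all(cell_of(s[i], partition) == cell_of(t[i], partition)
               for s in R for t in R for i in range(len(s))
               if all(s[j] == t[j] for j in range(len(s)) if j != i))

# Hypothetical domain of four objects in two "magnitude" cells.
M = [frozenset("ab"), frozenset("cd")]

# "Same magnitude relative to M": satisfies both conditions.
same_cell = {(x, y) for x in "abcd" for y in "abcd"
             if cell_of(x, M) == cell_of(y, M)}
print(condition_a(same_cell, M), condition_b(same_cell, M))   # True True

# Adding a cross-cell pair breaks condition (b).
print(condition_b(same_cell | {("c", "a")}, M))               # False
```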

As noted in the previous historical survey, the issues surrounding conventionalism and realism have been especially interesting to philosophers of measurement. They still are. Ellis (chapter 9 in this volume) attempts to rescue what is sound in his quasiconventionalist treatment of measurement (1966) from the holist-realist critique of recent philosophy of science. His earlier view was that any scale of measurement that represents empirical order is acceptable and that choices between acceptable scales are based on conventional, nonempirical criteria, such as simplicity in the resulting numerical laws and invariance of these laws among favored classes of scales (for example, classes of scales that transform under multiplication by a constant). He softens the earlier view by conceding to the holists that there is no useful distinction between conventional and empirical statements, and that criteria such as simplicity and invariance are as important as so-called empirical criteria in selecting theories that can face experience as a whole. Although insisting that spatial and temporal distance are relations and consequently yield to his earlier analysis, he concedes to the realists that quantities such as mass, charge, and spin are properties, and that whether scales of these quantities represent features in addition to linear order is a relevant consideration. He argues that these concessions do not diminish the importance and validity of the conventionalist program of analysis and rational reconstruction of scientific theory and practice, and he recommends its continuation in the theory of measurement and elsewhere.

Berka (chapter 10 in this volume) is completely unsympathetic to conventionalist and antirealist tendencies in measurement theory. He defends what he calls a materialist (or realist) view of measurement, according to which the object of measurement exists independently of and prior to the procedures used to measure it. One main rival to his view is the operationalist view of Bridgman and Stevens, on which measurement is any assignment of numerals according to any operational, empirical rule. His objection to this view is that the grounding operations are not always empirical and that most quantities cannot be defined by a single operation. The other main rival is the formalist view of measurement (KLST), which seeks sets of necessary and sufficient axioms for the existence and uniqueness of scales of measurement. He objects that the relations described by the axioms are not really empirical (or at least are unusable in empirical measurement procedures). He also questions the logical adequacy of the proofs and claims that the proofs are unnecessary, because the theorems proved could simply be assumed as axioms. On his materialist view (but not the others), derived magnitudes cannot become fundamental by a new choice of measuring operation, and units and nonabsolute zeros are not arbitrary but are grounded in empirical reality. He believes his view vitiates many attempts to extend the procedures of physical measurement to magnitudes in the behavioral and social sciences. He concludes with a case study of utility measurement to make the point.

Domotor (chapter 11 in this volume) makes contact with many of the metaphysical and measurement-theoretic questions heretofore considered. He recommends an interactionist (realist) approach to measurement in preference to the standard representationalist (empiricist, antirealist) approach. He contends that the representationalist approach views measurement as a device to make scientific theories conceptually and calculationally manageable. The goal of measurement theory on this approach is the discovery of axioms for qualitative structures that make it possible to derive a Ramsey sentence—the representation theorem—which entails that theoretical terms are eliminable in principle. The interactionist, on the other hand, views measurement as a real, causal interaction between a physical system and a measuring instrument, an interaction that can be characterized mathematically as an inner product on spaces of intensive and extensive quantities generated by rings of measurable magnitudes. Domotor illustrates this idea with a representation of a qualitative, comparative probability structure by means of a quantitative, Kolmogorov probability structure. He proposes that, by treating measurement structures as a mathematical category, one can define a projection map from quantitative to qualitative structures, in addition to the usual injective map (representation) in the reverse direction. The discovery of such projections is, on his approach, as important to measurement as the discovery of injections; he believes it will reduce the confusing proliferation of representation theorems and unify the field of measurement.

REFERENCES

Adams, E. W. (1966). On the nature and purpose of measurement. Synthese, 16, 125–169.

Aristotle (c. 330 B.C.). The basic works of Aristotle. New York: Random House, 1941.

Balzer, W., Moulines, C. Ulises, & Sneed, J. D. (1987). An architectonic for science: The structuralist program. Dordrecht & Boston: Kluwer.

Bergmann, G., & Spence, K. W. (1953/1944). The logic of psychophysical measurement. In H. Feigl & M. Brodbeck (Eds.), Readings in the philosophy of science (pp. 103–119). New York: Appleton-Century-Crofts. Reprinted from Psychological Review, 51 (1944), 1–24.

Campbell, N. R. (1957). Foundations of science. New York: Dover. (Reprint of Physics: The elements. Cambridge: Cambridge University Press, 1920.)

Campbell, N. R. (1928). An account of the principles of measurement and calculation. London: Longmans, Green.

Carnap, R. (1966). Philosophical foundations of physics. New York: Basic Books.

Cohen, M. R. & Nagel, E. (1934). An introduction to logic and scientific method. New York: Harcourt, Brace.

Ellis, B. (1966). Basic concepts of measurement. Cambridge: Cambridge University Press.

Euclid (c. 300 B.C.). The elements. Translated by T. L. Heath, The thirteen books of Euclid’s elements (2nd ed., Vols. I–III). New York: Dover, 1956.

Field, H. (1980). Science without numbers. Princeton: Princeton University Press.

Guilford, J. P. (1954). Psychometric methods (rev. ed.). New York: McGraw-Hill. First edition 1936.

von Helmholtz, H. (1887). Zählen und Messen, erkenntnistheoretisch betrachtet. In Philosophische Aufsätze Eduard Zeller gewidmet. Leipzig. English translation by C. L. Bryan, Counting and measuring. Princeton: Van Nostrand, 1930.

Hempel, C. G. (1952). Fundamentals of concept formation in empirical science. International Encyclopedia of Unified Science (Vol. II, No. 7). Chicago: University of Chicago Press.

Hölder, O. (1901). Die Axiome der Quantität und die Lehre vom Mass. Berichte über die Verhandlungen der Königlich Sächsischen Gesellschaft der Wissenschaften zu Leipzig, Mathematisch-Physische Klasse, 53, 1–64.

Krantz, D. H., Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of measurement: Vol. I. Additive and polynomial representations. New York & London: Academic Press. (Vol. II, by Suppes, Krantz, Luce, & Tversky, 1989; Vol. III, by Luce, Krantz, Suppes, & Tversky, 1990.)

Nagel, E. (1931). Measurement. Erkenntnis, 2, 313–333.

Quine, W. V. O. (1961). From a logical point of view (2nd ed.). Cambridge, MA: Harvard University Press.

Roberts, F. S. (1979). Measurement theory: With applications to decisionmaking, utility, and the social sciences. Reading, MA: Addison-Wesley.

Russell, B. (1903). Principles of mathematics. New York: Norton.

Savage, C. W. (1970). The measurement of sensation. Berkeley & Los Angeles: University of California Press.

Sneed, J. D. (1971). The logical structure of mathematical physics. Dordrecht/Boston/London: Reidel.

Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677–680.

Stevens, S. S. (1951). Mathematics, measurement, and psychophysics. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1–49). New York: Wiley.

Suppes, P. (1967). What is a scientific theory? In S. Morgenbesser (Ed.), Philosophy of science today (pp. 55–67). New York & London: Basic Books.

Thurstone, L. L. (1959). The measurement of values. Chicago: University of Chicago Press.

Torgerson, W. S. (1958). Theory and methods of scaling. New York: Wiley.
