Chapter 11

Cluster Analysis

Abstract

This chapter discusses the circumstances in which the cluster analysis technique can be used. Starting from two observations, different distance (dissimilarity) measures for metric variables and similarity measures for binary variables are calculated. Different hierarchical agglomeration schedules are described, as well as how to interpret dendrograms aiming to allocate the observations to each group. The nonhierarchical k-means agglomeration schedule and its differences in relation to hierarchical schedules will also be studied. Finally, we will develop a cluster analysis in an algebraic manner and by using IBM SPSS Statistics Software and Stata Statistical Software, and then interpret their results.

Keywords

Cluster analysis; Clustering; Distance or dissimilarity measures; Similarity measures; Agglomeration schedules; Hierarchical method; Nonhierarchical k-means method; Dendrogram; SPSS and Stata Software

Maybe Hamlet is right.

We could be bounded in a nutshell and count ourselves kings of infinite space.

Stephen Hawking

11.1 Introduction

Cluster analysis represents a set of very useful exploratory techniques that can be applied whenever we intend to verify the existence of similar behavior between observations (individuals, companies, municipalities, countries, among other examples) in relation to certain variables, and there is the intention of creating groups or clusters, in which an internal homogeneity prevails. In this regard, this set of techniques has as its main objective to allocate observations to a relatively small number of clusters that are internally homogeneous and heterogeneous between themselves, and that represent the joint behavior of the observations from certain variables. That is, the observations of a certain group must be relatively similar to one another, in relation to the variables inserted in the analysis, and significantly different from the observations found in other groups.

Clustering techniques are considered exploratory, or interdependent, since their applications do not have a predictive nature for other observations not initially present in the sample. Moreover, the inclusion of new observations into the dataset makes it necessary to reapply the modeling, so that, possibly, new clusters can be generated. Besides, the inclusion of a new variable can also generate a complete rearrangement of the observations in the groups.

Researchers can choose to develop a cluster analysis when their main goal is to sort and allocate observations to groups and, from then on, to analyze what the ideal number of clusters formed is. Or they can, a priori, define the number of groups they wish to create, based on certain criteria, and verify how the sorting and allocation of observations behave in that specific number of groups. Regardless of the objective, clustering will continue being exploratory. If a researcher aims to use a technique to, in fact, confirm the creation of groups and to make the analysis predictive, he can use techniques such as discriminant analysis or multinomial logistic regression.

Elaborating a cluster analysis does not require vast knowledge of matrix algebra or statistics, different from techniques such as factor analysis and correspondence analysis. The researcher interested in applying a cluster analysis needs, starting from the definition of the research objectives, to choose a certain distance or similarity measure, which will be the basis for considering observations closer to or farther from one another, and a certain agglomeration schedule, which will have to be chosen from among hierarchical and nonhierarchical methods. After that, he will be able to analyze, interpret, and compare the outcomes.

It is important to highlight that the outcomes obtained through hierarchical and nonhierarchical agglomeration schedules can be compared and, in this regard, the researcher is free to develop the technique using one method or another, and to reapply it, if he deems necessary. While hierarchical schedules allow us to identify the sorting and allocation of observations, offering possibilities for researchers to study, assess, and decide the number of clusters formed, in nonhierarchical schedules we start with a known number of clusters and, from then on, we begin allocating the observations to these clusters, with a subsequent evaluation of the representativeness of each variable when creating them. Therefore, the result of one method can serve as input to carry out the other, making the analysis cyclical. Fig. 11.1 shows the logic from which a cluster analysis can be elaborated.

Fig. 11.1
Fig. 11.1 Logic for elaborating a cluster analysis.

When choosing the distance or similarity measure and the agglomeration schedule, we must take some aspects into consideration, such as a previously desired number of clusters, possibly defined based on resource allocation criteria, as well as certain constraints that may lead the researcher to choose a specific solution. According to Bussab et al. (1990), different criteria regarding distance measures and agglomeration schedules may lead to different cluster formations, and the homogeneity desired by the researcher fundamentally depends on the objectives set in the research.

Imagine that a researcher is interested in studying the interdependence between individuals living in a certain municipality based only on two metric variables (age, in years, and average family income, in R$). His main goal is to assess the effectiveness of social programs aimed at providing health care and then, based on these variables, to propose a still unknown number of new programs aimed at homogeneous groups of people. After collecting the data, the researcher constructed a scatter plot, as shown in Fig. 11.2.

Fig. 11.2
Fig. 11.2 Scatter plot with individuals’ Income and Age.

Based on the chart seen in Fig. 11.2, the researcher identified four clusters and highlighted them in a new chart (Fig. 11.3).

Fig. 11.3
Fig. 11.3 Highlighting the creation of four clusters.

From the creation of these clusters, the researcher decided to develop an analysis of the behavior of the observations in each group, or, more precisely, of the existing variability within the clusters and between them, so that he could clearly and consciously base his decision as regards the allocation of individuals to these four new social programs. In order to illustrate this issue, the researcher constructed the chart found in Fig. 11.4.

Fig. 11.4
Fig. 11.4 Illustrating the variability within the clusters and between them.

Based on this chart, the researcher was able to notice that the groups formed showed a lot of internal homogeneity, with a certain individual being closer to other individuals in the same group than to individuals in other groups. This is the core of cluster analysis.

If the number of social programs to be provided for the population (number of clusters) had already been given to the researcher, due to budgetary, legal, or political constraints, clustering could still be used, solely to determine the allocation of individuals from the municipality to that number of programs (groups).

Having concluded the research and allocated the individuals to the different social, health care programs, the following year, the researcher decided to carry out the same research with individuals from the same municipality. However, in the meantime, a group of elderly billionaires decided to move to that city, and, when he constructed the new scatter plot, the researcher realized that those four clusters, clearly formed the previous year, did not exist anymore, since they fused when the billionaires were included. The new scatter plot can be seen in Fig. 11.5.

Fig. 11.5
Fig. 11.5 Rearranging the clusters due to the presence of elderly billionaires.

This new situation exemplifies the importance of always reapplying the cluster analysis whenever new observations (and also new variables) are included, which deprives the technique of any predictive power, as we have already discussed.

Moreover, this example shows that, before elaborating any cluster analysis, it is advisable for the researcher to study the data behavior and to check for observations that are discrepant in relation to certain variables, since the creation of clusters is very sensitive to the presence of outliers. Excluding or retaining outliers in the dataset, however, will depend on the research objectives and on the type of data the researcher has. If certain observations represent anomalies in terms of variable values, when compared to the other observations, and end up forming small, insignificant, or even individual clusters, they can, in fact, be excluded. On the other hand, if these observations represent one or more relevant groups, even if different from the others, they must be considered in the analysis and, whenever the technique is reapplied, they can be separated so that the remaining observations can be better structured into new groups with higher internal homogeneity.

We would like to emphasize that cluster analysis methods are considered static procedures, since the inclusion of new observations or variables may change the clusters, thus, making it mandatory to develop a new analysis.

In this example, we realized that the original variables from which the groups are established are metric, since the clustering started from the study of the distance behavior (dissimilarity measures) between the observations. In some cases, as we will study throughout this chapter, cluster analyses can be elaborated from the similarity behavior (similarity measures) between observations that present binary variables. However, it is common for researchers to use the incorrect arbitrary weighting procedure with qualitative variables, as, for example, variables on the Likert scale, and, from then on, to apply a cluster analysis. This is a major error, since there are exploratory techniques meant exclusively for the study of the behavior of qualitative variables as, for example, the correspondence analysis.

Historically speaking, even though many distance and similarity measures date back to the end of the 19th century and the beginning of the 20th century, cluster analyses, as a better structured set of techniques, began in the field of Anthropology with Driver and Kroeber (1932), and in Psychology with Zubin (1938a,b) and Tryon (1939), as discussed by Reis (2001) and Fávero et al. (2009). With the acknowledgment that observation clustering and classification procedures are scientific methods, together with astonishing technological developments, mainly verified after the 1960s, cluster analyses started being used more frequently after Sokal and Sneath’s (1963) relevant work was published, in which procedures are carried out to compare the biological similarities of organisms with similar characteristics and the respective species.

Currently, cluster analysis offers several application possibilities in the fields of consumer behavior, market segmentation, strategy, political science, economics, finance, accounting, actuarial science, engineering, logistics, computer science, education, medicine, biology, genetics, biostatistics, psychology, anthropology, demography, geography, ecology, climatology, geology, archeology, criminology and forensics, among others.

In this chapter, we will discuss cluster analysis techniques, aiming at: (1) introducing the concepts; (2) presenting the modeling step by step, in an algebraic and practical way; (3) interpreting the results obtained; and (4) applying the technique in SPSS and in Stata. Following the logic proposed in the book, first, we will present the algebraic solution of an example jointly with the presentation of the concepts. Only after the introduction of the concepts will the procedures for elaborating the techniques in SPSS and Stata be presented.

11.2 Cluster Analysis

Many are the procedures for elaborating a cluster analysis, since there are different distance or similarity measures for metric or binary variables, respectively. Besides, after defining the distance or similarity measure, the researcher still needs to determine, among several possibilities, the observation clustering method, from certain hierarchical or nonhierarchical criteria. Therefore, when one wishes to group observations in internally homogeneous clusters, what initially seems trivial can become quite complex, because there are multiple combinations between different distance or similarity measures and clustering methods. Hence, based on the underlying theory and on his research objectives, as well as on his experience and intuition, it is extremely important for the researcher to define the criteria from which the observations will be allocated to each one of the groups.

In the following sections, we will discuss the theoretical development of the technique, along with a practical example. In Sections 11.2.1 and 11.2.2, the concepts of distance and similarity measures and clustering methods are presented and discussed, respectively, always followed by the algebraic solutions developed from a dataset.

11.2.1 Defining Distance or Similarity Measures in Cluster Analysis

As we have already discussed, the first phase for elaborating a cluster analysis consists in defining the distance (dissimilarity) or similarity measure that will be the basis for each observation to be allocated to a certain group.

Distance measures are frequently used when the variables in the dataset are essentially metric, since the greater the differences between the variable values of two observations, the smaller the similarity between them or, in other words, the higher the dissimilarity.

On the other hand, similarity measures are often used when the variables are binary, and what most interests us is the frequency of converging answer pairs 1-1 or 0-0 of two observations. In this case, the greater the frequency of converging pairs, the higher the similarity between the observations.

An exception to this rule is Pearson's correlation coefficient between two observations, which is calculated from metric variables but has similarity characteristics, as we will see in the following section.

We will study the dissimilarity measures for metric variables in Section 11.2.1.1 and, in Section 11.2.1.2, we will discuss the similarity measures for binary variables.

11.2.1.1 Distance (Dissimilarity) Measures Between Observations for Metric Variables

As a hypothetical situation, imagine that we intend to calculate the distance between two observations i (i = 1, 2) from a dataset that has three metric variables (X1i, X2i, X3i), with values in the same unit of measure. These data can be found in Table 11.1.

Table 11.1

Part of a Dataset With Two Observations and Three Metric Variables
Observation i    X1i    X2i    X3i
1                3.7    2.7    9.1
2                7.8    8.0    1.5


It is possible to illustrate the configuration of both observations in a three-dimensional space from these data, since we have exactly three variables. Fig. 11.6 shows the relative position of each observation, emphasizing the distance between them (d12).

Fig. 11.6
Fig. 11.6 Three-dimensional scatter plot for the hypothetical situation with two observations and three variables.

Distance d12, which is a dissimilarity measure, can be easily calculated by using, for instance, its projection over the horizontal plane formed by axes X1 and X2, called distance d′12, as shown in Fig. 11.7.

Fig. 11.7
Fig. 11.7 Three-dimensional chart highlighting the projection of d12 over the horizontal plane.

Thus, based on the well-known Pythagorean distance formula for right-angled triangles, we can determine d12 through the following expression:

$$d_{12} = \sqrt{(d'_{12})^2 + (X_{31} - X_{32})^2} \tag{11.1}$$

where | X31 − X32 | is the distance of the vertical projections (axis X3) from points 1 and 2.

However, distance d′12 is unknown to us, so, once again, we need to use the Pythagorean formula, now using the distances of the projections of Points 1 and 2 over the other two axes (X1 and X2), as shown in Fig. 11.8.

Fig. 11.8
Fig. 11.8 Projection of the points over the plane formed by X1 and X2 with emphasis on d´12.

Thus, we can say that:

$$d'_{12} = \sqrt{(X_{11} - X_{12})^2 + (X_{21} - X_{22})^2} \tag{11.2}$$

and, substituting (11.2) into (11.1), we have:

$$d_{12} = \sqrt{(X_{11} - X_{12})^2 + (X_{21} - X_{22})^2 + (X_{31} - X_{32})^2} \tag{11.3}$$

which is the expression of distance (dissimilarity measure) between Points 1 and 2, also known as the Euclidean distance formula.

Therefore, for the data in our example, we have:

$$d_{12} = \sqrt{(3.7 - 7.8)^2 + (2.7 - 8.0)^2 + (9.1 - 1.5)^2} = 10.132$$

whose unit of measure is the same as for the original variables in the dataset. It is important to highlight that, if the variables do not have the same unit of measure, a data standardization procedure will have to be carried out previously, as we will discuss later.
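As a quick check, Expression (11.3) can be applied directly to the values in Table 11.1. The snippet below is a minimal sketch of ours, in plain Python; it is not part of the chapter's SPSS or Stata procedures.

```python
# Euclidean distance between observations 1 and 2 of Table 11.1,
# following Expression (11.3)
obs1 = [3.7, 2.7, 9.1]   # X11, X21, X31
obs2 = [7.8, 8.0, 1.5]   # X12, X22, X32

d12 = sum((a - b) ** 2 for a, b in zip(obs1, obs2)) ** 0.5
print(round(d12, 3))     # expected: 10.132
```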

We can generalize this problem for a situation in which the dataset has n observations and, for each observation i (i = 1, ..., n), values corresponding to each one of the j (j = 1, ..., k) metric variables X, as shown in Table 11.2.

Table 11.2

General Model of a Dataset for Elaborating the Cluster Analysis
                 Variable j
Observation i    X1i    X2i    ...    Xki
1                X11    X21    ...    Xk1
2                X12    X22    ...    Xk2
...              ...    ...    ...    ...
p                X1p    X2p    ...    Xkp
q                X1q    X2q    ...    Xkq
...              ...    ...    ...    ...
n                X1n    X2n    ...    Xkn


So, Expression (11.4), based on Expression (11.3), presents the general definition of the Euclidean distance between any two observations p and q.

$$d_{pq} = \sqrt{(X_{1p} - X_{1q})^2 + (X_{2p} - X_{2q})^2 + \cdots + (X_{kp} - X_{kq})^2} = \sqrt{\sum_{j=1}^{k}(X_{jp} - X_{jq})^2} \tag{11.4}$$

Although the Euclidean distance is the one most commonly used in cluster analyses, there are other dissimilarity measures, and the use of each one depends on the researcher's assumptions and objectives. Next, we will discuss some of them:

  •  Squared Euclidean distance: it can be used instead of the Euclidean distance when the variables show small dispersion in their values, since squaring the distances makes it easier to interpret the outputs of the analysis and the allocation of the observations to the groups. Its expression is given by:

$$d_{pq} = (X_{1p} - X_{1q})^2 + (X_{2p} - X_{2q})^2 + \cdots + (X_{kp} - X_{kq})^2 = \sum_{j=1}^{k}(X_{jp} - X_{jq})^2 \tag{11.5}$$

  •  Minkowski Distance: it is the most general dissimilarity measure expression from which others derive. It is given by:

$$d_{pq} = \left[\sum_{j=1}^{k}\left(|X_{jp} - X_{jq}|\right)^{m}\right]^{\frac{1}{m}} \tag{11.6}$$

where m takes on positive integer values (m = 1, 2, ...). We can see that the Euclidean distance is a particular case of the Minkowski distance, when m = 2.

  •  Manhattan Distance: also referred to as the absolute or city block distance, it does not consider the triangular geometry that is inherent to Pythagoras’ initial expression and only considers the differences between the values of each variable. Its expression, also a particular case of the Minkowski distance when m = 1, is given by:

$$d_{pq} = \sum_{j=1}^{k}|X_{jp} - X_{jq}| \tag{11.7}$$

  •  Chebyshev Distance: also referred to as the infinite or maximum distance, it considers, for two observations, only the maximum difference among all the j variables being studied. Its expression is given by:

$$d_{pq} = \max_{j}\,|X_{jp} - X_{jq}| \tag{11.8}$$

It is also a limiting case of the Minkowski distance, when m → ∞.

  •  Canberra Distance: used in cases in which the variables only have positive values, it assumes values between 0 and k (the number of variables). Its expression is given by:

$$d_{pq} = \sum_{j=1}^{k}\frac{|X_{jp} - X_{jq}|}{X_{jp} + X_{jq}} \tag{11.9}$$
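A small sketch (ours, assuming SciPy is available; it is not part of the chapter's SPSS or Stata routines) illustrates how the Minkowski distance generalizes the previous measures for the two observations of Table 11.1:

```python
from scipy.spatial import distance

x = [3.7, 2.7, 9.1]
y = [7.8, 8.0, 1.5]

print(distance.minkowski(x, y, p=1))   # 17.000, the Manhattan distance (m = 1)
print(distance.minkowski(x, y, p=2))   # 10.132, the Euclidean distance (m = 2)
print(distance.minkowski(x, y, p=50))  # approximately 7.600, approaching the Chebyshev distance
print(distance.chebyshev(x, y))        # 7.600, the maximum absolute difference
```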

Whenever there are metric variables, the researcher can also use Pearson's correlation, which, even though it is not a dissimilarity measure (in fact, it is a similarity measure), can provide important information when the aim is to group rows of the dataset. Pearson's correlation expression between the values of any two observations p and q, based on Expression (4.11) presented in Chapter 4, can be written as follows:

$$\rho_{pq} = \frac{\sum_{j=1}^{k}(X_{jp} - \bar{X}_p)(X_{jq} - \bar{X}_q)}{\sqrt{\sum_{j=1}^{k}(X_{jp} - \bar{X}_p)^2}\cdot\sqrt{\sum_{j=1}^{k}(X_{jq} - \bar{X}_q)^2}} \tag{11.10}$$

where X̄p and X̄q represent the mean of all variable values for observations p and q, respectively, that is, the mean of each one of the rows in the dataset.

Therefore, we can see that we are dealing with a coefficient of correlation between rows, and not between columns (variables), the latter being the most common use in data analysis; its values vary between − 1 and 1. Pearson's correlation coefficient can be used as a similarity measure between the rows of the dataset in analyses that include time series, for example, that is, cases in which the observations represent periods. In this case, the researcher may intend to study the correlations between different periods to investigate, for instance, a possible recurrence of behavior in the rows for the set of variables, which may cause certain periods, not necessarily subsequent ones, to be grouped by similarity of behavior.

Going back to the data presented in Table 11.1, we can calculate the different distance measures between observations 1 and 2, given by Expressions (11.4) to (11.9), as well as the correlational similarity measure, given by Expression (11.10). Table 11.3 shows these calculations and the respective results.

Table 11.3

Distance and Correlational Similarity Measures Between Observations 1 and 2
Observation i    X1i    X2i    X3i    Mean
1                3.7    2.7    9.1    5.167
2                7.8    8.0    1.5    5.767

Euclidean distance:
d12 = √[(3.7 − 7.8)² + (2.7 − 8.0)² + (9.1 − 1.5)²] = 10.132
Squared Euclidean distance:
d12 = (3.7 − 7.8)² + (2.7 − 8.0)² + (9.1 − 1.5)² = 102.660
Manhattan distance:
d12 = |3.7 − 7.8| + |2.7 − 8.0| + |9.1 − 1.5| = 17.000
Chebyshev distance:
d12 = |9.1 − 1.5| = 7.600
Canberra distance:
d12 = |3.7 − 7.8|/(3.7 + 7.8) + |2.7 − 8.0|/(2.7 + 8.0) + |9.1 − 1.5|/(9.1 + 1.5) = 1.569
Pearson's correlation (similarity):
ρ12 = [(3.7 − 5.167)(7.8 − 5.767) + (2.7 − 5.167)(8.0 − 5.767) + (9.1 − 5.167)(1.5 − 5.767)] /
      {√[(3.7 − 5.167)² + (2.7 − 5.167)² + (9.1 − 5.167)²] · √[(7.8 − 5.767)² + (8.0 − 5.767)² + (1.5 − 5.767)²]} = −0.993


Based on the results shown in Table 11.3, we can see that different measures produce different results, which may cause the observations to be allocated to different homogeneous clusters, depending on which measure was chosen for the analysis, as discussed by Vicini and Souza (2005) and Malhotra (2012). Therefore, it is essential for the researcher to always underpin his choice and to bear in mind the reasons why he decided to use a certain measure, instead of others. Simply using more than one measure, when analyzing the same dataset, can support this decision, since, in this case, the results can be compared.
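The values in Table 11.3 can also be verified with standard library routines, as in the sketch below (ours, assuming NumPy and SciPy; for the strictly positive values used here, SciPy's canberra function matches Expression (11.9)):

```python
import numpy as np
from scipy.spatial import distance

x1 = np.array([3.7, 2.7, 9.1])
x2 = np.array([7.8, 8.0, 1.5])

print(distance.euclidean(x1, x2))    # 10.132
print(distance.sqeuclidean(x1, x2))  # 102.660
print(distance.cityblock(x1, x2))    # 17.000 (Manhattan)
print(distance.chebyshev(x1, x2))    # 7.600
print(distance.canberra(x1, x2))     # 1.569
print(np.corrcoef(x1, x2)[0, 1])     # -0.993 (Pearson correlation between the two rows)
```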

This becomes really clear when we include a third observation in the analysis, as shown in Table 11.4.

Table 11.4

Part of the Dataset With Three Observations and Three Metric Variables
Observation i    X1i    X2i    X3i
1                3.7    2.7    9.1
2                7.8    8.0    1.5
3                8.9    1.0    2.7


While the Euclidean distance suggests that the most similar observations (the shortest distance) are 2 and 3, when we use the Chebyshev distance, observations 1 and 3 are the most similar. Table 11.5 shows these distances for each pair of observations, highlighting, in bold characters, the smallest value of each distance.

Table 11.5

Euclidian and Chebyshev Distances Between the Pairs of Observations Seen in Table 11.4
Distance     Pair of Observations 1 and 2    Pair of Observations 1 and 3    Pair of Observations 2 and 3
Euclidean    d12 = 10.132                    d13 = 8.420                     d23 = 7.187
Chebyshev    d12 = 7.600                     d13 = 6.400                     d23 = 7.000


Hence, in a certain cluster schedule, and only due to the dissimilarity measure chosen, we would have different initial clusters.

Besides deciding which distance measure to choose, the researcher also has to verify whether the data need to be treated previously. So far, in the examples we have already discussed, we were careful to choose metric variables with values in the same unit of measure (as, for example, students' grades in Math, Physics, and Chemistry, which vary from 0 to 10). However, if the variables are measured in different units (as, for example, income in R$, educational level in years of study, and number of children), the intensity of the distances between the observations may be arbitrarily influenced by the variables that present greater magnitude in their values, to the detriment of the others. In these situations, the researcher must standardize the data, so that the arbitrary nature of the measurement units is eliminated, making each variable contribute equally to the distance measure considered.

The Z-scores procedure is the most frequently used method to standardize variables. In it, for each observation i, the value of a new standardized variable ZXj is obtained by subtracting, from the corresponding original variable value Xj, its mean and then dividing the result by its standard deviation, as presented in Expression (11.11).

$$ZX_{ji} = \frac{X_{ji} - \bar{X}_j}{s_j} \tag{11.11}$$

where X̄j and sj represent the mean and the standard deviation of variable Xj, respectively. Hence, regardless of the magnitude of the values and of the type of measurement units of the original variables in a dataset, all the respective variables standardized by the Z-scores procedure will have a mean equal to zero and a standard deviation equal to 1, which ensures that possible arbitrary measurement units over the distance between each pair of observations will be eliminated. In addition, Z-scores have the advantage of not changing the distribution of the original variable.

Therefore, if the original variables are measured in different units, the terms Xjp and Xjq in the distance measure Expressions (11.4) to (11.9) must be substituted, respectively, by ZXjp and ZXjq. Table 11.6 presents these expressions, based on the standardized variables.

Table 11.6

Distance Measure Expressions With Standardized Variables
Distance Measure (Dissimilarity)    Expression (sums taken over j = 1, ..., k)
Euclidean                           dpq = √[Σ (ZXjp − ZXjq)²]
Squared Euclidean                   dpq = Σ (ZXjp − ZXjq)²
Minkowski                           dpq = [Σ (|ZXjp − ZXjq|)^m]^(1/m)
Manhattan                           dpq = Σ |ZXjp − ZXjq|
Chebyshev                           dpq = max |ZXjp − ZXjq|
Canberra                            dpq = Σ |ZXjp − ZXjq| / (ZXjp + ZXjq)

Even though Pearson’s correlation is not a dissimilarity measure (in fact, it is a similarity measure), it is important to mention that its use also requires that the variables be standardized by the Z-scores procedure in case they do not have the same measurement units. If the goal were to group variables, which is the purpose of the following chapter (factor analysis), the standardization of variables through the Z-scores procedure would, in fact, be irrelevant, given that the analysis would consist in assessing the correlation between columns of the dataset. On the other hand, as the objective of this chapter is to group rows of the dataset, which represent the observations, the standardization of the variables is necessary for elaborating an accurate cluster analysis.
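A sketch of this standardization step (ours, assuming NumPy and SciPy; the three variables and their values are hypothetical, following the income in R$, years of study, and number of children example mentioned above) applies Expression (11.11) and then computes the pairwise Euclidean distances:

```python
import numpy as np
from scipy.stats import zscore
from scipy.spatial.distance import pdist, squareform

# rows = observations, columns = variables measured in different units (hypothetical values)
X = np.array([[2500.0, 12, 2],
              [ 800.0,  8, 4],
              [4300.0, 16, 1]])

# Z-scores per column: mean 0 and standard deviation 1
# (default uses the population standard deviation; pass ddof=1 for the sample standard deviation)
ZX = zscore(X, axis=0)

D = squareform(pdist(ZX, metric='euclidean'))
print(np.round(D, 3))   # distances no longer dominated by the income variable's magnitude
```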

11.2.1.2 Similarity Measures Between Observations for Binary Variables

Now, imagine that we intend to calculate the distance between two observations i (i = 1, 2) coming from a dataset that has seven variables (X1i, ..., X7i), however, all of them related to the presence or absence of characteristics. In this situation, it is common for the presence or absence of a certain characteristic to be represented by a binary variable, or a dummy, which assumes value 1, in case the characteristic occurs, and 0, if otherwise. These data can be found in Table 11.7.

Table 11.7

Part of the Dataset With Two Observations and Seven Binary Variables
Observation i    X1i    X2i    X3i    X4i    X5i    X6i    X7i
1                0      0      1      1      0      1      1
2                0      1      1      1      1      0      1


It is important to highlight that the use of binary variables does not generate arbitrary weighting problems resulting from the variable categories, contrary to what would happen if discrete values (1, 2, 3, ...) were assigned to each category of each qualitative variable. In this regard, if a certain qualitative variable has k categories, (k − 1) binary variables will be necessary to represent the presence or absence of each one of the categories. Thus, all the binary variables will be equal to 0 in case the reference category occurs.
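As an illustration of this coding, the sketch below (ours, not from the chapter; it assumes the pandas library, and the qualitative variable and its categories are hypothetical) creates (k − 1) binary variables for a variable with k = 3 categories:

```python
import pandas as pd

# hypothetical qualitative variable with k = 3 categories
df = pd.DataFrame({'education': ['primary', 'secondary', 'higher', 'primary']})

# drop_first=True keeps (k - 1) dummies; the dropped category ('higher', the first
# in alphabetical order here) becomes the reference, coded as all zeros
dummies = pd.get_dummies(df['education'], drop_first=True, dtype=int)
print(dummies)
```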

Therefore, by using Expression (11.5), we can calculate the squared Euclidean distance between observations 1 and 2, as follows:

$$d_{12} = \sum_{j=1}^{7}(X_{j1} - X_{j2})^2 = (0-0)^2 + (0-1)^2 + (1-1)^2 + (1-1)^2 + (0-1)^2 + (1-0)^2 + (1-1)^2 = 3$$

which represents the total number of variables with answer differences between observations 1 and 2.
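A brief check of this interpretation (ours, assuming NumPy) shows that the squared Euclidean distance between the binary vectors of Table 11.7 simply counts the diverging answer pairs:

```python
import numpy as np

obs1 = np.array([0, 0, 1, 1, 0, 1, 1])
obs2 = np.array([0, 1, 1, 1, 1, 0, 1])

print(np.sum((obs1 - obs2) ** 2))   # 3, the squared Euclidean distance
print(np.sum(obs1 != obs2))         # 3 again, the number of diverging answers
```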

Therefore, for any two observations p and q, the greater the number of equal answers (0-0 or 1-1), the shorter the squared Euclidean distance between them will be, since:

$$(X_{jp} - X_{jq})^2 = \begin{cases} 0, & \text{if } X_{jp} = X_{jq} \text{ (both 0 or both 1)} \\ 1, & \text{if } X_{jp} \neq X_{jq} \end{cases} \tag{11.12}$$

As discussed by Johnson and Wichern (2007), each term of the distance represented by Expression (11.12) is considered a dissimilarity measure, since the greater the number of answer discrepancies, the greater the squared Euclidean distance. On the other hand, this calculation weights the pairs of answers 0-0 and 1-1 equally, without giving higher relative importance to the pair of answers 1-1, which, in many cases, is a stronger similarity indicator than the pair of answers 0-0. For example, when we group people, the fact that two of them eat lobster every day is stronger similarity evidence than the absence of this characteristic for both.

Hence, many authors, aiming at defining similarity measures between observations, proposed the use of coefficients that would take the similarity of the answers 1-1 and 0-0 into consideration, and these pairs would not necessarily have the same relative importance. In order for us to be able to present these measures, it is necessary to construct an absolute frequency table of answers 0 and 1 for each pair of observations p and q, as shown in Table 11.8.

Table 11.8

Absolute Frequencies of Answers 0 and 1 for Two Observations p and q
                      Observation p
Observation q    1        0        Total
1                a        b        a + b
0                c        d        c + d
Total            a + c    b + d    a + b + c + d


Next, based on this table, we will discuss the main similarity measures, bearing in mind that the use of each one depends on the researcher’s assumptions and objectives.

  •  Simple matching coefficient (SMC): it is the most frequently used similarity measure for binary variables, discussed and used by Zubin (1938a) and by Sokal and Michener (1958). This coefficient, which assigns equal weights to the converging 1-1 and 0-0 answers, has its expression given by:

$$s_{pq} = \frac{a + d}{a + b + c + d} \tag{11.13}$$

  •  Jaccard index: even though it was first proposed by Gilbert (1884), it received this name because it was discussed and used in two extremely important papers developed by Jaccard (1901, 1908). This measure, also known as Jaccard similarity coefficient, does not take the frequency of the pair of answers 0-0 into consideration, which is considered irrelevant. However, it is possible to come across a situation in which all the variables are equal to 0 for two observations, that is, there is only frequency in cell d of Table 11.8. In this case, software packages such as Stata present the Jaccard index equal to 1, which makes sense from a similarity standpoint. Its expression is given by:

$$s_{pq} = \frac{a}{a + b + c} \tag{11.14}$$

  •  Dice similarity coefficient (DSC): although it is usually known only by this name, it was suggested and discussed by Czekanowski (1932), Dice (1945), and Sørensen (1948). It is similar to the Jaccard index; however, it doubles the weight of the frequency of converging 1-1 answer pairs. Just as in that case, software such as Stata presents the Dice coefficient as equal to 1 in cases in which all the variables are equal to 0 for both observations, thus avoiding any indeterminacy in the calculation. Its expression is given by:

$$s_{pq} = \frac{2a}{2a + b + c} \tag{11.15}$$

  •  Anti-Dice similarity coefficient: initially proposed by Sokal and Sneath (1963) and Anderberg (1973), its name comes from the fact that this coefficient doubles the weight of the divergent answer pairs (1-0 and 0-1), that is, it doubles the weight of the answer divergences. Just like the Jaccard and the Dice coefficients, the anti-Dice coefficient also ignores the frequency of 0-0 answer pairs. Its expression is given by:

$$s_{pq} = \frac{a}{a + 2(b + c)} \tag{11.16}$$

  •  Russell and Rao similarity coefficient: it is also widely used, and it only considers the converging 1-1 answers as evidence of similarity in the calculation of its coefficient. It was proposed by Russell and Rao (1940), and its expression is given by:

$$s_{pq} = \frac{a}{a + b + c + d} \tag{11.17}$$

  •  Ochiai similarity coefficient: even though it is known by this name, it was initially proposed by Driver and Kroeber (1932) and later used by Ochiai (1957). This coefficient is undefined when one or both of the observations being studied present all variable values equal to 0. However, if both vectors present all values equal to 0, software such as Stata presents the Ochiai coefficient as equal to 1; if this happens for only one of the two vectors, the Ochiai coefficient is considered equal to 0. Its expression is given by:

$$s_{pq} = \frac{a}{\sqrt{(a + b)(a + c)}} \tag{11.18}$$

  •  Yule similarity coefficient: proposed by Yule (1900) and used by Yule and Kendall (1950), this similarity coefficient for binary variables varies from − 1 to 1. As we can see through its expression, presented next, the coefficient is undefined if one or both of the vectors compared present all values equal to 0 or 1. Software such as Stata generates the Yule coefficient equal to 1 if b = c = 0 (a total convergence of answers), and equal to − 1 if a = d = 0 (a total divergence of answers).

$$s_{pq} = \frac{ad - bc}{ad + bc} \tag{11.19}$$

  •  Rogers and Tanimoto similarity coefficient: this coefficient, which doubles the weight of discrepant answers 0-1 and 1-0 in relation to the weight of the combinations of converging type 1-1 and 0-0 answers, was initially proposed by Rogers and Tanimoto (1960). Its expression, which becomes equal to the anti-Dice coefficient when the frequency of 0-0 answers is equal to 0 (d = 0), is given by:

$$s_{pq} = \frac{a + d}{a + d + 2(b + c)} \tag{11.20}$$

  •  Sneath and Sokal similarity coefficient: different from the Rogers and Tanimoto coefficient, this coefficient, proposed by Sneath and Sokal (1962), doubles the weight of converging type 1-1 and 0-0 answers in relation to the other answer combinations (1-0 and 0-1). Its expression, which becomes equal to the Dice coefficient when the frequency of type 0-0 answers is equal to 0 (d = 0), is given by:

$$s_{pq} = \frac{2(a + d)}{2(a + d) + b + c} \tag{11.21}$$

  •  Hamann similarity coefficient: Hamann (1961) proposed this similarity coefficient for binary variables aiming at having the frequencies of discrepant answers (1-0 and 0-1) subtracted from the total of converging answers (1-1 and 0-0). This coefficient, which varies from − 1 (total answer divergence) to 1 (total answer convergence), is equal to two times the simple matching coefficient minus 1. Its expression is given by:

$$s_{pq} = \frac{(a + d) - (b + c)}{a + b + c + d} \tag{11.22}$$

As was discussed in Section 11.2.1.1 as regards the dissimilarity measures applied to metric variables, let’s go back to the data presented in Table 11.7, aiming at calculating the different similarity measures between observations 1 and 2, which only have binary variables. In order to do that, from that table, we must construct the absolute frequency table of answers 0 and 1 for the observations mentioned (Table 11.9).

Table 11.9

Absolute Frequencies of Answers 0 and 1 for Observations 1 and 2
                   Observation 1
Observation 2    1    0    Total
1                3    2    5
0                1    1    2
Total            4    3    7


So, using Expressions (11.13) to (11.22), we are able to calculate the similarity measures themselves. Table 11.10 presents the calculations and the results of each coefficient.

Table 11.10

Similarity Measures Between Observations 1 and 2
Simple matching:        s12 = (3 + 1)/7 = 0.571
Jaccard:                s12 = 3/6 = 0.500
Dice:                   s12 = 2(3)/[2(3) + 2 + 1] = 0.667
Anti-Dice:              s12 = 3/[3 + 2(2 + 1)] = 0.333
Russell and Rao:        s12 = 3/7 = 0.429
Ochiai:                 s12 = 3/√[(3 + 2)(3 + 1)] = 0.671
Yule:                   s12 = (3·1 − 2·1)/(3·1 + 2·1) = 0.200
Rogers and Tanimoto:    s12 = (3 + 1)/[3 + 1 + 2(2 + 1)] = 0.400
Sneath and Sokal:       s12 = 2(3 + 1)/[2(3 + 1) + 2 + 1] = 0.727
Hamann:                 s12 = [(3 + 1) − (2 + 1)]/7 = 0.143

Analogous to what was discussed when the dissimilarity measures were calculated, we can clearly see that different similarity measures generate different results, which may cause, when defining the cluster method, the observations to be allocated to different homogeneous clusters, depending on which measure was chosen for the analysis.
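The contingency counts of Table 11.9 and the coefficients of Table 11.10 can be reproduced with the short sketch below (our own helper code, assuming NumPy; the formulas are coded directly from Expressions (11.13) to (11.22), not taken from SPSS or Stata):

```python
import numpy as np

obs1 = np.array([0, 0, 1, 1, 0, 1, 1])
obs2 = np.array([0, 1, 1, 1, 1, 0, 1])

# absolute frequencies of Table 11.9
a = np.sum((obs1 == 1) & (obs2 == 1))   # converging 1-1 answers
b = np.sum((obs1 == 0) & (obs2 == 1))   # diverging 0-1 answers
c = np.sum((obs1 == 1) & (obs2 == 0))   # diverging 1-0 answers
d = np.sum((obs1 == 0) & (obs2 == 0))   # converging 0-0 answers
n = a + b + c + d

coefficients = {
    'simple matching':     (a + d) / n,                         # 0.571
    'Jaccard':             a / (a + b + c),                      # 0.500
    'Dice':                2 * a / (2 * a + b + c),              # 0.667
    'anti-Dice':           a / (a + 2 * (b + c)),                # 0.333
    'Russell and Rao':     a / n,                                # 0.429
    'Ochiai':              a / np.sqrt((a + b) * (a + c)),       # 0.671
    'Yule':                (a * d - b * c) / (a * d + b * c),    # 0.200
    'Rogers and Tanimoto': (a + d) / (a + d + 2 * (b + c)),      # 0.400
    'Sneath and Sokal':    2 * (a + d) / (2 * (a + d) + b + c),  # 0.727
    'Hamann':              ((a + d) - (b + c)) / n,              # 0.143
}
for name, value in coefficients.items():
    print(f'{name}: {value:.3f}')
```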

Bear in mind that it does not make any sense to apply the Z-scores standardization procedure to calculate the similarity measures discussed in this section, since the variables used for the cluster analysis are binary.

At this moment, it is important to emphasize that, instead of using similarity measures to define the clusters whenever there are binary variables, it is very common to define clusters from the coordinates of each observation, which can be generated when elaborating simple or multiple correspondence analyses, for instance. This is an exploratory technique applied solely to datasets that have qualitative variables, aiming at creating perceptual maps, which are constructed based on the frequency of the categories of each one of the variables in analysis (Fávero and Belfiore, 2017).

After defining the coefficient that will be used, based on the research objectives, on the underlying theory, and on his experience and intuition, the researcher must move on to the definition of the cluster schedule. The main cluster analysis schedules will be studied in the following section.

11.2.2 Agglomeration Schedules in Cluster Analysis

As discussed by Vicini and Souza (2005) and Johnson and Wichern (2007), in cluster analysis, choosing the clustering method, also known as the agglomeration schedule, is as important as defining the distance (or similarity) measure, and this decision must also be made based on what the researcher intends to do in terms of the research objectives.

Basically, agglomeration schedules can be classified into two types: hierarchical and nonhierarchical. While the former are characterized by a hierarchical (step-by-step) structure for forming clusters, nonhierarchical schedules use algorithms that maximize the homogeneity within each cluster, without going through a hierarchical process to do so.

Hierarchical agglomeration schedules can be clustering or partitioning, depending on how the process starts. If all the observations are considered to be separated and, from their distances (or similarities), groups are formed until we reach a final stage with only one cluster, then this process is known as clustering. Among all hierarchical agglomeration schedules, the most commonly used are those that have the following linkage methods: nearest-neighbor or single-linkage, furthest-neighbor or complete-linkage, or between-groups or average-linkage. On the other hand, if all the observations are considered grouped and, stage after stage, smaller groups are formed by the separation of each observation, until these subdivisions generate individual groups (that is, totally separated observations), then, we have a partitioning process.

Conversely, nonhierarchical agglomeration schedules, among which the most popular is the k-means procedure, refer to processes in which clustering centers are defined, and from which the observations are allocated based on their proximity to them. Different from hierarchical schedules, in which the researcher can study the several possibilities for allocating observations and even define the ideal number of clusters based on each one of the grouping stages, a nonhierarchical agglomeration schedule requires that we previously stipulate the number of clusters from which the clustering centers will be defined and the observations allocated. That is why we recommend the generation of a hierarchical agglomeration schedule before constructing a nonhierarchical schedule, when there is no reasonable estimate of the number of clusters that can be formed from the observations in the dataset and based on the variables in study.

Fig. 11.9 shows the logic of agglomeration schedules in cluster analysis.

Fig. 11.9
Fig. 11.9 Agglomeration schedules in cluster analysis.

We will study hierarchical agglomeration schedules in Section 11.2.2.1, and Section 11.2.2.2 will be used to discuss the nonhierarchical k-means agglomeration schedule.

11.2.2.1 Hierarchical Agglomeration Schedules

In this section, we will discuss the main hierarchical agglomeration schedules, in which larger and larger clusters are formed at each clustering stage because new observations or groups are added to it, due to a certain criterion (linkage method) and based on the distance measure chosen. In Section 11.2.2.1.1, the main concepts of these schedules will be presented, and, in Section 11.2.2.1.2, a practical example will be presented and solved algebraically.

11.2.2.1.1 Notation

There are three main linkage methods in hierarchical agglomeration schedules, as shown in Fig. 11.9: the nearest-neighbor or single-linkage, the furthest-neighbor or complete-linkage, and the between-groups or average-linkage.

Table 11.11 illustrates the distance to be considered in each clustering stage, based on the linkage method chosen.

Table 11.11

Distance to be Considered Based on the Linkage Method
Linkage Method                                        Distance (Dissimilarity)
Single (Nearest-Neighbor or Single-Linkage)           d23
Complete (Furthest-Neighbor or Complete-Linkage)      d15
Average (Between-Groups or Average-Linkage)           (d13 + d14 + d15 + d23 + d24 + d25)/6

(In the illustration accompanying this table, two clusters are considered: one formed by observations 1 and 2 and the other by observations 3, 4, and 5.)

The single-linkage method favors the shortest distances (thus, the nomenclature nearest neighbor) so that new clusters can be formed at each clustering stage through the incorporation of observations or groups. Therefore, applying it is advisable in cases in which the observations are relatively far apart, that is, different, and we would like to form clusters considering a minimum of homogeneity. On the other hand, its analysis may be hampered when there are observations or clusters just a little farther apart from each other, as shown in Fig. 11.10.

Fig. 11.10
Fig. 11.10 Single-linkage method—Hampered analysis when there are observations or clusters just a little further apart.

The complete-linkage method, on the other hand, goes in the opposite direction, that is, it favors the greatest distances between the observations or groups so that new clusters can be formed (hence, the name furthest neighbor) and, in this regard, using it is advisable in cases in which there is no considerable distance between the observations, and the researcher needs to identify the heterogeneities between them.

Finally, in the average-linkage method, two groups merge based on the average distance between all the pairs of observations that are in these groups (hence, the name average linkage). Accordingly, even though there are changes in the calculation of the distance measures between the clusters, the average-linkage method ends up preserving the order of the observations in each group, offered by the single-linkage method, in case there is a considerable distance between the observations. The same happens with the sorting solution provided by the complete-linkage method, if the observations are very close to each other.

Johnson and Wichern (2007) proposed a logical sequence of steps in order to facilitate the understanding of a cluster analysis, elaborated through a certain hierarchical agglomerative method:

  1. If n is the number of observations in a dataset, we must start the agglomeration schedule with exactly n individual groups (stage 0), such that we will initially have a distances (or similarities) matrix D0 formed by the distances between each pair of observations.
  2. In the first stage, we must choose the smallest distance among all of those that form matrix D0, that is, the one that connects the two most similar observations. At this exact moment, we will no longer have n individual groups; we will have (n − 1) groups, one of them formed by two observations.
  3. In the following clustering stage, we must repeat the previous step. However, we now have to take into consideration the distances between each pair of observations and between the first group already formed and each one of the other observations, based on the linkage method adopted. In other words, after the first clustering stage, we will have matrix D1 with dimensions (n − 1) × (n − 1), in which one of the rows will be represented by the first grouped pair of observations. Consequently, in the second stage, a new group will be formed either by the grouping of two new observations or by the addition of a certain observation to the group formed in the first stage.
  4. The previous process must be repeated (n − 1) times, until there is only a single group formed by all the observations. In other words, in stage (n − 2) we will have matrix Dn−2, which will only contain the distance between the last two remaining groups, before the final fusion.
  5. Finally, from the clustering stages and the distances between the clusters formed, it is possible to develop a tree-shaped diagram that summarizes the clustering process and explains the allocation of each observation to each cluster. This diagram is known as a dendrogram or phenogram.

Therefore, the values that form the D matrices of each one of the stages will be a function of the distance measure chosen and of the linkage method adopted. In a certain clustering stage s, imagine that a researcher groups two previously formed clusters M and N, containing m and n observations, respectively, so that cluster MN is formed. Next, he intends to group MN with another cluster W, with w observations. Since, in hierarchical agglomerative methods, the next cluster to be formed always corresponds to the smallest distance between pairs of observations or groups, it is essential to analyze the distances that form each matrix Ds at every stage of the agglomeration schedule. Using this logic and based on Table 11.11, let's discuss the criterion for calculating the distance between clusters MN and W, inserted in matrix Ds, based on the linkage method:

  •  Nearest-Neighbor or Single-Linkage Method:

$$d_{(MN)W} = \min\{d_{MW};\ d_{NW}\} \tag{11.23}$$

where dMW and dNW are the distances between the closest observations in clusters M and W and in clusters N and W, respectively.

  •  Furthest-Neighbor or Complete-Linkage Method:

$$d_{(MN)W} = \max\{d_{MW};\ d_{NW}\} \tag{11.24}$$

where dMW and dNW are the distances between the farthest observations in clusters M and W and in clusters N and W, respectively.

  •  Between-Groups or Average-Linkage Method:

$$d_{(MN)W} = \frac{\sum_{p=1}^{m+n}\sum_{q=1}^{w} d_{pq}}{(m+n)\,(w)} \tag{11.25}$$

where dpq represents the distance between any observation p in cluster MN and any observation q in cluster W, and m + n and w represent the number of observations in clusters MN and W, respectively.
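As a computational illustration of Expressions (11.23) to (11.25), the sketch below (our own helper function, assuming NumPy; the 4 × 4 distance matrix used at the end is hypothetical) returns the distance between a merged cluster MN and a cluster W under each linkage criterion:

```python
import numpy as np

def cluster_distance(D, cluster_mn, cluster_w, method='single'):
    """D: pairwise distance matrix; clusters are lists of observation indices."""
    # distances between every pair (p, q) with p in MN and q in W
    cross = np.array([[D[p, q] for q in cluster_w] for p in cluster_mn])
    if method == 'single':      # nearest neighbor, Expression (11.23)
        return cross.min()
    if method == 'complete':    # furthest neighbor, Expression (11.24)
        return cross.max()
    if method == 'average':     # between groups, Expression (11.25)
        return cross.mean()
    raise ValueError('unknown linkage method')

# hypothetical usage with a generic distance matrix for four observations
D = np.array([[ 0.0,  2.0, 6.0, 10.0],
              [ 2.0,  0.0, 5.0,  9.0],
              [ 6.0,  5.0, 0.0,  4.0],
              [10.0,  9.0, 4.0,  0.0]])
print(cluster_distance(D, [0, 1], [2, 3], method='average'))  # (6 + 10 + 5 + 9)/4 = 7.5
```

These are the same criteria implemented by scipy.cluster.hierarchy.linkage under method='single', 'complete', and 'average', which we will use to verify the practical example that follows.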

In the following section, we will present a practical example that will be solved algebraically, and from which the concepts of hierarchical agglomerative methods will be established.

11.2.2.1.2 A Practical Example of Cluster Analysis With Hierarchical Agglomeration Schedules

Imagine that a college professor, who is very concerned about his students’ capacity to learn the subject he teaches, Quantitative Methods, is interested in allocating them to groups with the highest homogeneity possible, based on the grades they obtained on the college entrance exams in subjects considered quantitative (Math, Physics, and Chemistry).

In order to do that, the professor collected information on these grades, which vary from 0 to 10. In addition, since he will carry out a cluster analysis, first, in an algebraic way, he decided, for pedagogical purposes, to only work with five students. This dataset can be seen in Table 11.12.

Table 11.12

Example: Grades in Math, Physics, and Chemistry on the College Entrance Exam
Student (Observation)    Grade in Mathematics (X1i)    Grade in Physics (X2i)    Grade in Chemistry (X3i)
Gabriela                 3.7                           2.7                       9.1
Luiz Felipe              7.8                           8.0                       1.5
Patricia                 8.9                           1.0                       2.7
Ovidio                   7.0                           1.0                       9.0
Leonor                   3.4                           2.0                       5.0


Based on the data obtained, the chart in Fig. 11.11 is constructed and, since the variables are metric, the dissimilarity measure known as the Euclidean distance will be used for the cluster analysis. Besides, since all the variables have values in the same unit of measure (grades from 0 to 10), in this case it will not be necessary to standardize them through Z-scores.

Fig. 11.11
Fig. 11.11 Three-dimensional chart with the relative position of the five students.

In the following sections, hierarchical agglomeration schedules based on the Euclidian distance will be elaborated through the three linkage methods being studied.

11.2.2.1.2.1 Nearest-Neighbor or Single-Linkage Method

At this moment, from the data presented in Table 11.12, let’s develop a cluster analysis through a hierarchical agglomeration schedule with the single-linkage method. First of all, we define matrix D0, formed by the Euclidean distances (dissimilarities) between each pair of observations, as follows:

D0            Gabriela    Luiz Felipe    Patricia    Ovidio    Leonor
Gabriela        0
Luiz Felipe    10.132         0
Patricia        8.420        7.187          0
Ovidio          3.713       10.290         6.580        0
Leonor          4.170        8.223         6.045       5.474       0

It is important to mention that, at this initial moment, each observation is considered an individual cluster, that is, in stage 0, we have five clusters (the sample size). The smallest distance in matrix D0 (3.713) is the one between Gabriela and Ovidio; therefore, in the first stage, these two observations are grouped and become a new cluster.

We must construct matrix D1 so that we can go to the next clustering stage, in which the distances between the cluster Gabriela-Ovidio and the other observations, which are still isolated, are calculated. Thus, by using the single-linkage method and based on Expression (11.23), we have:

d(Gabriela-Ovidio),Luiz Felipe = min{10.132; 10.290} = 10.132

d(Gabriela-Ovidio),Patricia = min{8.420; 6.580} = 6.580

d(Gabriela-Ovidio),Leonor = min{4.170; 5.474} = 4.170

Matrix D1 is as follows:

D1                 Gabriela-Ovidio    Luiz Felipe    Patricia    Leonor
Gabriela-Ovidio          0
Luiz Felipe            10.132             0
Patricia                6.580            7.187           0
Leonor                  4.170            8.223         6.045         0

In the same way, the smallest distance in matrix D1 (4.170) indicates the next fusion. Therefore, in the second stage, observation Leonor is inserted into the already formed cluster Gabriela-Ovidio. Observations Luiz Felipe and Patricia still remain isolated.

We must construct matrix D2 so that we can take the next step, in which the distances between the cluster Gabriela-Ovidio-Leonor and the two remaining observations are calculated. Analogously, we have:

d(Gabriela-Ovidio-Leonor),Luiz Felipe = min{10.132; 8.223} = 8.223

d(Gabriela-Ovidio-Leonor),Patricia = min{6.580; 6.045} = 6.045

Matrix D2 can be written as:

D2                        Gabriela-Ovidio-Leonor    Luiz Felipe    Patricia
Gabriela-Ovidio-Leonor            0
Luiz Felipe                     8.223                   0
Patricia                        6.045                 7.187            0

In the third clustering stage, observation Patricia is incorporated into the cluster Gabriela-Ovidio-Leonor, since the corresponding distance is the smallest among all the ones presented in matrix D2. Therefore, we can write matrix D3, which comes next, taking into consideration the following criterion:

d(Gabriela-Ovidio-Leonor-Patricia),Luiz Felipe = min{8.223; 7.187} = 7.187

D3                                 Gabriela-Ovidio-Leonor-Patricia    Luiz Felipe
Gabriela-Ovidio-Leonor-Patricia              0
Luiz Felipe                                7.187                          0

Finally, in the fourth and last stage, all the observations are allocated to the same cluster, thus, concluding the hierarchical process. Table 11.13 presents a summary of this agglomeration schedule constructed by using the single-linkage method.

Table 11.13

Agglomeration Schedule Through the Single-Linkage Method
Stage    Cluster                            Grouped Observation    Smallest Euclidean Distance
1        Gabriela                           Ovidio                 3.713
2        Gabriela-Ovidio                    Leonor                 4.170
3        Gabriela-Ovidio-Leonor             Patricia               6.045
4        Gabriela-Ovidio-Leonor-Patricia    Luiz Felipe            7.187


Based on this agglomeration schedule, we can construct a tree-shaped diagram, known as a dendrogram or phenogram, whose main objective is to illustrate the clustering process step by step and to facilitate the visualization of how each observation is allocated at each stage. The dendrogram can be seen in Fig. 11.12.

Fig. 11.12
Fig. 11.12 Dendrogram—Single-linkage method.
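For readers who want to verify these steps computationally, the sketch below (ours, assuming SciPy and Matplotlib; it is not the SPSS or Stata procedure presented later in the chapter) reproduces the single-linkage schedule of Table 11.13 and draws a dendrogram analogous to Fig. 11.12:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

students = ['Gabriela', 'Luiz Felipe', 'Patricia', 'Ovidio', 'Leonor']
X = np.array([[3.7, 2.7, 9.1],
              [7.8, 8.0, 1.5],
              [8.9, 1.0, 2.7],
              [7.0, 1.0, 9.0],
              [3.4, 2.0, 5.0]])

# hierarchical agglomeration schedule with the nearest-neighbor criterion
Z = linkage(X, method='single', metric='euclidean')
print(np.round(Z, 3))        # merge distances: 3.713, 4.170, 6.045, 7.187

dendrogram(Z, labels=students)
plt.show()
```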

Through Figs. 11.13 and 11.14, we are able to interpret the dendrogram constructed.

Fig. 11.13
Fig. 11.13 Interpreting the dendrogram—Number of clusters and allocation of observations.
Fig. 11.14
Fig. 11.14 Interpreting the dendrogram—Distance leaps.

First of all, we drew three lines (I, II, and III) that are orthogonal to the dendrogram lines, as shown in Fig. 11.13, which allow us to identify the number of clusters in each clustering stage, as well as the observations in each cluster.

Therefore, line I “cuts” the dendrogram immediately after the first clustering stage and, at this moment, we can verify that there are four clusters (four intersections with the dendrogram’s horizontal lines), one of them formed by observations Gabriela and Ovidio, and the others, by the individual observations.

On the other hand, line II intersects three horizontal lines of the dendrogram, which means that, after the second stage, in which observation Leonor was incorporated into the already formed cluster Gabriela-Ovidio, there are three clusters.

Finally, line III is drawn immediately after the third stage, in which observation Patricia merges with the cluster Gabriela-Ovidio-Leonor. Since two intersections between this line and the dendrogram’s horizontal lines are identified, we can see that observation Luiz Felipe remains isolated, while the others form a single cluster.

Besides providing a study of the number of clusters in each clustering stage and of the allocation of observations, a dendrogram also allows the researcher to analyze the magnitude of the distance leaps in order to establish the clusters. A high magnitude leap, in comparison to the others, can indicate that a certain observation or cluster, a considerably different one, is incorporated into already formed clusters, which offers subsidies for the establishment of a solution regarding the number of clusters without the need for a next clustering stage.

Although we know that setting an inflexible, mandatory number of clusters may hamper the analysis, at least giving an idea of this number, given the distance measure used and the linkage method adopted, may help researchers better understand the characteristics of the observations that led to this fact. Moreover, since the number of clusters is important for constructing nonhierarchical agglomeration schedules, this piece of information (considered an output of the hierarchical schedule) may serve as input for the k-means procedure.

Fig. 11.14 presents three distance leaps (A, B, and C), corresponding to the transitions between consecutive clustering stages, and, from their analysis, we can see that leap B, which represents the incorporation of observation Patricia into the previously formed cluster Gabriela-Ovidio-Leonor, is the greatest of the three. Therefore, if we intend to set the ideal number of clusters in this example, the researcher may choose the solution with three clusters (line II in Fig. 11.13), without the stage in which observation Patricia is incorporated, since this observation possibly has characteristics that are not so homogeneous, which makes it unfeasible to include it in the previously formed cluster, given the large distance leap. Thus, in this case, we would have a cluster formed by Gabriela, Ovidio, and Leonor, another one formed only by Patricia, and a third one formed only by Luiz Felipe.

When using dissimilarity measures in clustering methods, a very useful criterion for identifying the number of clusters consists in locating a considerable distance leap (whenever possible) and defining the number of clusters formed in the clustering stage immediately before this great leap, since very high leaps tend to incorporate observations with characteristics that are not so homogeneous.
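Continuing the previous sketch (same assumptions, reusing Z and students), the distance leaps and a cut of the dendrogram into three clusters can be obtained as follows:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster

merge_distances = Z[:, 2]          # 3.713, 4.170, 6.045, 7.187
print(np.diff(merge_distances))    # leaps between consecutive stages; the largest (about 1.875) is leap B

# solution in the stage immediately before the great leap: three clusters
labels = fcluster(Z, t=3, criterion='maxclust')
print(dict(zip(students, labels))) # Gabriela, Ovidio, and Leonor share a label; Patricia and Luiz Felipe are singletons
```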

Furthermore, it is also important to mention that, if the distance leaps from one stage to another are small, due to the existence of variables whose values are too close across the observations, which can make it difficult to read the dendrogram, the researcher may use the squared Euclidean distance, so that the leaps become clearer and better defined, making it easier to identify the clusters in the dendrogram and providing better arguments for the decision-making process.

Software such as SPSS shows dendrograms with rescaled distance measures, in order to facilitate the interpretation of the allocation of each observation and the visualization of the large distance leaps.

Fig. 11.15 illustrates how clusters can be established after the single-linkage method is elaborated.

Fig. 11.15
Fig. 11.15 Suggestion of clusters formed after the single-linkage method.

Next, we will develop the same example. However, now, let’s use the complete- and average-linkage methods, so that we can compare the order of the observations and the distance leaps.

11.2.2.1.2.2 Furthest-Neighbor or Complete-Linkage Method

Matrix D0, shown here, is obviously the same, and the smallest Euclidian distance, the one highlighted, is between observations Gabriela and Ovidio, which form the first cluster. It is important to emphasize that the first cluster will always be the same, regardless of the linkage method used, since the first stage always considers the smallest distance between a pair of observations that are still isolated.

[Matrix D0: Euclidian distances between each pair of observations, with the smallest distance highlighted.]

In the complete-linkage method, we must use Expression (11.24) to construct matrix D1, as follows:

$d_{\text{(Gabriela-Ovidio) Luiz Felipe}} = \max\{10.132;\ 10.290\} = 10.290$

$d_{\text{(Gabriela-Ovidio) Patricia}} = \max\{8.420;\ 6.580\} = 8.420$

$d_{\text{(Gabriela-Ovidio) Leonor}} = \max\{4.170;\ 5.474\} = 5.474$

Matrix D1 can be seen below and, by analyzing it, we can see that observation Leonor will be incorporated into the cluster formed by Gabriela and Ovidio. Once again, the smallest value among all those shown in matrix D1 is highlighted.

[Matrix D1: distances between the cluster Gabriela-Ovidio and the remaining observations, with the smallest distance highlighted.]

As verified when using the single-linkage method, here, observations Luiz Felipe and Patricia also remain isolated at this stage. The differences between the methods start arising now. Therefore, we will construct matrix D2 using the following criteria:

$d_{\text{(Gabriela-Ovidio-Leonor) Luiz Felipe}} = \max\{10.290;\ 8.223\} = 10.290$

$d_{\text{(Gabriela-Ovidio-Leonor) Patricia}} = \max\{8.420;\ 6.045\} = 8.420$

Matrix D2 can be written as follows:

[Matrix D2: distances between the cluster Gabriela-Ovidio-Leonor and the remaining observations, with the smallest distance highlighted.]

In the third clustering stage, a new cluster is formed by the fusion of observations Patricia and Luiz Felipe, since the furthest-neighbor criterion adopted in the complete-linkage method makes the distance between these two observations become the smallest among all the ones calculated to construct matrix D2. Therefore, notice that at this stage differences related to the single-linkage method appear, in terms of the sorting and allocation of the observations to groups.

Hence, to construct matrix D3, we must take the following criterion into consideration:

$d_{\text{(Gabriela-Ovidio-Leonor) (Luiz Felipe-Patricia)}} = \max\{10.290;\ 8.420\} = 10.290$

[Matrix D3: distance between the clusters Gabriela-Ovidio-Leonor and Luiz Felipe-Patricia.]

In the same way, in the fourth and last stage, all the observations are allocated to the same cluster, since the clusters Gabriela-Ovidio-Leonor and Luiz Felipe-Patricia merge. Table 11.14 shows a summary of this agglomeration schedule, elaborated by using the complete-linkage method.

Table 11.14 Agglomeration Schedule Through the Complete-Linkage Method

Stage   Cluster                    Grouped Observation     Smallest Euclidian Distance
1       Gabriela                   Ovidio                  3.713
2       Gabriela-Ovidio            Leonor                  5.474
3       Luiz Felipe                Patricia                7.187
4       Gabriela-Ovidio-Leonor     Luiz Felipe-Patricia    10.290

This agglomeration schedule’s dendrogram can be seen in Fig. 11.16. We can initially see that the sorting of the observations is different from what was observed in the dendrogram seen in Fig. 11.12.

Fig. 11.16 Dendrogram—Complete-linkage method.

Analogous to what was carried out in the previous method, we chose to draw two vertical lines (I and II) over the largest distance leap, as shown in Fig. 11.17.

Fig. 11.17 Interpreting the dendrogram—Clusters and distance leaps.

Thus, if the researcher chooses to consider three clusters, the solution will be the same as the one achieved previously through the single-linkage method, one formed by Gabriela, Ovidio, and Leonor, another one by Luiz Felipe, and a third one by Patricia (line I in Fig. 11.17). However, if he chooses to define two clusters (line II), the solution will be different since, in this case, the second cluster will be formed by Luiz Felipe and Patricia, while in the previous case, it was formed only by Luiz Felipe, since observation Patricia was allocated to the first cluster.

Similar to what was done in the previous method, Fig. 11.18 illustrates how the clusters can be established after the complete-linkage method is carried out.

Fig. 11.18 Suggestion of clusters formed after the complete-linkage method.

The choice of clustering method can also consider the average-linkage method, in which two groups merge based on the average distance between all the pairs of observations that belong to them. Therefore, as we have already discussed, if the single-linkage method is the most suitable because there are observations considerably far apart from one another, the sorting and allocation of the observations will be maintained by the average-linkage method. On the other hand, if the observations are very similar in terms of the variables under study, the outputs of this method will be consistent with the solution achieved through the complete-linkage method as regards the sorting and allocation of the observations.

Thus, it is advisable for the researcher to apply the three linkage methods when elaborating a cluster analysis through hierarchical agglomeration schedules. Therefore, let’s move on to the average-linkage method.

11.2.2.1.2.3 Between-Groups or Average-Linkage Method

First of all, let’s show the Euclidian distance matrix between each pair of observations (matrix D0), once again, highlighting the smallest distance between them.

[Matrix D0: Euclidian distances between each pair of observations, with the smallest distance highlighted.]

By using Expression (11.25), we are able to calculate the terms of matrix D1, given that the first cluster Gabriela-Ovidio has already been formed. Thus, we have:

$d_{\text{(Gabriela-Ovidio) Luiz Felipe}} = \dfrac{10.132 + 10.290}{2} = 10.211$

$d_{\text{(Gabriela-Ovidio) Patricia}} = \dfrac{8.420 + 6.580}{2} = 7.500$

$d_{\text{(Gabriela-Ovidio) Leonor}} = \dfrac{4.170 + 5.474}{2} = 4.822$

Matrix D1 can be seen below and, through it, we can see that observation Leonor is once again incorporated into the cluster formed by Gabriela and Ovidio. The smallest value among all those presented in matrix D1 has also been highlighted.

[Matrix D1: distances between the cluster Gabriela-Ovidio and the remaining observations, with the smallest distance highlighted.]

In order to construct matrix D2, in which the distances between the cluster Gabriela-Ovidio-Leonor and the two remaining observations are calculated, we must perform the following calculations:

$d_{\text{(Gabriela-Ovidio-Leonor) Luiz Felipe}} = \dfrac{10.132 + 10.290 + 8.223}{3} = 9.548$

$d_{\text{(Gabriela-Ovidio-Leonor) Patricia}} = \dfrac{8.420 + 6.580 + 6.045}{3} = 7.015$

Note that the distances used to calculate the dissimilarities to be inserted into matrix D2 are the original Euclidian distances between each pair of observations, that is, they come from matrix D0. Matrix D2 can be seen:

[Matrix D2: distances between the cluster Gabriela-Ovidio-Leonor and the remaining observations, with the smallest distance highlighted.]

As verified when the single-linkage method was elaborated, here, observation Patricia is also incorporated into the cluster already formed by Gabriela, Ovidio and Leonor, and observation Luiz Felipe remains isolated. Finally, matrix D3 can be constructed from the following calculation:

$d_{\text{(Gabriela-Ovidio-Leonor-Patricia) Luiz Felipe}} = \dfrac{10.132 + 10.290 + 8.223 + 7.187}{4} = 8.958$

[Matrix D3: distance between the cluster Gabriela-Ovidio-Leonor-Patricia and observation Luiz Felipe.]

Once again, in the fourth and last stage, all the observations are in the same cluster. Table 11.15 and Fig. 11.19 present a summary of this agglomeration schedule and the corresponding dendrogram, respectively, resulting from this average-linkage method.

Table 11.15 Agglomeration Schedule Through the Average-Linkage Method

Stage   Cluster                             Grouped Observation   Smallest Euclidian Distance
1       Gabriela                            Ovidio                3.713
2       Gabriela-Ovidio                     Leonor                4.822
3       Gabriela-Ovidio-Leonor              Patricia              7.015
4       Gabriela-Ovidio-Leonor-Patricia     Luiz Felipe           8.958

Fig. 11.19 Dendrogram—Average-linkage method.

Despite having other distance values, we can see that Table 11.15 and Fig. 11.19 show the same sorting and the same allocation of observations in the clusters as those presented in Table 11.13 and in Fig. 11.12, respectively, obtained when the single-linkage method was elaborated.

Hence, we can state that the observations are significantly different from one another with regard to the variables studied, a fact supported by the consistency between the answers obtained from the single- and average-linkage methods. If the observations were more similar, which is not what the diagram in Fig. 11.11 shows, the consistency of answers would occur between the complete- and average-linkage methods, as already discussed. Therefore, when possible, the initial construction of scatter plots may help researchers, even if in a preliminary way, choose the method to be adopted.

Hierarchical agglomeration schedules are very useful and offer us the possibility to analyze, in an exploratory way, the similarity between observations based on the behavior of certain variables. However, it is essential for researchers to understand that these methods are not conclusive by themselves and more than one answer may be obtained, depending on what is desired and on the data behavior.

Besides, it is necessary for researchers to be aware of how sensitive these methods are to the presence of outliers. The existence of a very discrepant observation may cause other observations, not so similar to one another, to be allocated to the same cluster simply because they are all extremely different from the observation considered an outlier. Hence, it is advisable to apply the hierarchical agglomeration schedules, with the chosen linkage method, several times and, in each application, to identify one or more observations considered outliers. This procedure makes the cluster analysis more reliable, since increasingly homogeneous clusters may be formed. Researchers may characterize the most discrepant observation as the one that remains isolated until the penultimate clustering stage, that is, right before the total fusion. Nonetheless, there are many methods for defining an outlier. Barnett and Lewis (1994), for instance, mention almost 1,000 articles in the existing literature on outliers and, for pedagogical purposes, in the Appendix of this chapter, we will discuss an efficient procedure in Stata for detecting outliers when a researcher is carrying out a multivariate data analysis.

It is also important to emphasize, as we have already discussed in this section, that different linkage methods, when elaborating hierarchical agglomeration schedules, must be applied to the same dataset, and the resulting dendrograms, compared. This procedure will help researchers in their decision-making processes with regard to choosing the ideal number of clusters, and also to sorting the observations and allocating each one of them to the different clusters formed. This will even allow researchers to make coherent decisions about the number of clusters that may be considered input in a possible nonhierarchical analysis.
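As a hedged illustration of this comparison, the Python sketch below (assuming SciPy is installed) recomputes the three hierarchical agglomeration schedules for the five observations of our example; the fusion distances returned by linkage should correspond to the last column of Tables 11.13, 11.14, and 11.15.

```python
# Minimal sketch (assumes scipy is installed): single-, complete-, and
# average-linkage schedules for the five students of this example.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

students = ["Gabriela", "Luiz Felipe", "Patricia", "Ovidio", "Leonor"]
X = np.array([[3.7, 2.7, 9.1],
              [7.8, 8.0, 1.5],
              [8.9, 1.0, 2.7],
              [7.0, 1.0, 9.0],
              [3.4, 2.0, 5.0]])

d = pdist(X, metric="euclidean")  # matrix D0 in condensed form

for method in ("single", "complete", "average"):
    Z = linkage(d, method=method)
    # The third column of Z holds the fusion distance of each clustering stage,
    # i.e., the last column of the corresponding agglomeration schedule table.
    print(method, Z[:, 2].round(3))
    # dendrogram(Z, labels=students)  # uncomment to draw the corresponding dendrogram
```

For this dataset, the fusion distances printed for "complete" and "average" should match Tables 11.14 and 11.15, which allows a quick check of the dendrogram comparison suggested above.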

Last but not least, it is worth mentioning that the agglomeration schedules presented in this section (Tables 11.13, 11.14, and 11.15) provide increasing values of the clustering measures because a dissimilarity measure was used (Euclidian distance) as a comparison criterion between the observations. If we had chosen Pearson’s correlation between the observations, a similarity measure also used for metric variables, as we discussed in Section 11.2.1.1, the values of the clustering measures in the agglomeration schedules would be decreasing. The latter is also true for cluster analyses in which similarity measures are used, as the ones studied in Section 11.2.1.2, to assess the behavior of observations based on binary variables.

In the following section we will develop the same example, in an algebraic way, using the nonhierarchical k-means agglomeration schedule.

11.2.2.2 Nonhierarchical K-Means Agglomeration Schedule

Among all the nonhierarchical agglomeration schedules, the k-means procedure is the one most often used by researchers in several fields of knowledge. Since the number of clusters must be defined beforehand by the researcher, this procedure can be elaborated after the application of a hierarchical agglomeration schedule whenever we have no idea of the number of clusters that may be formed; in this situation, the output obtained from the hierarchical schedule can serve as input for the nonhierarchical one.

11.2.2.2.1 Notation

As with the sequence developed in Section 11.2.2.1.1, we now present a logical sequence of steps, based on Johnson and Wichern (2007), in order to facilitate the understanding of the cluster analysis through the k-means procedure:

  1. We define the initial number of clusters and the respective centroids. The main objective is to divide the observations from the dataset into K clusters, such that the observations within each cluster are closer to one another than to any observation that belongs to a different cluster. For that, the observations need to be allocated arbitrarily to the K clusters, so that the respective centroids can be calculated.
  2. We must choose a certain observation that is closer to the centroid of another cluster and reallocate it to that cluster. At this moment, the cluster that previously contained the observation has just lost it and, therefore, the centroids of the cluster that receives it and of the cluster that loses it must be recalculated.
  3. We must keep repeating the previous step until it is no longer possible to reallocate any observation, that is, until no observation is closer to the centroid of a cluster other than its own.

Centroid coordinate $\bar{x}$ must be recalculated whenever a certain observation p is included in or excluded from the respective cluster, based on the following expressions:

$\bar{x}_{new} = \dfrac{N\bar{x} + x_p}{N + 1}$, if observation p is inserted into the cluster under analysis  (11.26)

$\bar{x}_{new} = \dfrac{N\bar{x} - x_p}{N - 1}$, if observation p is excluded from the cluster under analysis  (11.27)

where N and $\bar{x}$ refer to the number of observations in the cluster and to its centroid coordinate before the reallocation of that observation, respectively. In addition, $x_p$ refers to the coordinate of observation p, which changed clusters.
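The Python sketch below (an illustration, not part of the original text; the function names are ours) implements Expressions (11.26) and (11.27) as two small helper functions.

```python
# Minimal sketch of Expressions (11.26) and (11.27): updating a centroid
# coordinate when an observation enters or leaves a cluster of size n.
import numpy as np

def centroid_after_insertion(centroid, n, x_p):
    """New centroid when observation x_p joins a cluster of size n."""
    return (n * np.asarray(centroid) + np.asarray(x_p)) / (n + 1)

def centroid_after_exclusion(centroid, n, x_p):
    """New centroid when observation x_p leaves a cluster of size n."""
    return (n * np.asarray(centroid) - np.asarray(x_p)) / (n - 1)

# Example from Table 11.18 (presented later in this chapter): removing Gabriela
# (3.7, 2.7, 9.1) from the cluster Gabriela-Luiz Felipe, whose centroid is
# (5.75, 5.35, 5.30), leaves Luiz Felipe alone at his own coordinates.
print(centroid_after_exclusion([5.75, 5.35, 5.30], 2, [3.7, 2.7, 9.1]))  # -> [7.8, 8.0, 1.5]
```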

For two variables (X1 and X2), Fig. 11.20 shows a hypothetical situation that represents the end of the k-means procedure, in which it is no longer possible to reallocate any observation because there are no more close proximities to centroids of other clusters.

Fig. 11.20 Hypothetical situation that represents the end of the K-means procedure.

The matrix with the distances between observations does not need to be computed at each step, differently from hierarchical agglomeration schedules, which reduces the requirements in terms of computational capabilities and allows nonhierarchical agglomeration schedules to be applied to considerably larger datasets than those traditionally studied through hierarchical schedules.

In addition, bear in mind that the variables must be standardized before elaborating the k-means procedure (as is also the case for hierarchical agglomeration schedules) whenever their values are not in the same unit of measure.

Finally, after concluding this procedure, it is important for researchers to analyze if the values of a certain metric variable differ between the groups defined, that is, if the variability between the clusters is significantly higher than the internal variability of each cluster. The F-test of the one-way analysis of variance, or one-way ANOVA, allows us to develop this analysis, and its null and alternative hypotheses can be defined as follows:

  • H0: the variable under analysis has the same mean in all the groups formed.
  • H1: the variable under analysis has a different mean in at least one of the groups in relation to the others.

Therefore, a single F-test can be applied for each variable, aiming to assess the existence of at least one difference among all the comparison possibilities and, in this regard, its main advantage is that no adjustments for the different sizes of the groups need to be carried out in order to analyze several comparisons. On the other hand, rejecting the null hypothesis at a certain significance level does not allow the researcher to know which group(s) is (are) statistically different from the others in relation to the variable being analyzed.

The F statistic corresponding to this test is given by the following expression:

$F = \dfrac{\text{variability between the groups}}{\text{variability within the groups}} = \dfrac{\dfrac{\sum_{k=1}^{K} N_k\,(\bar{X}_k - \bar{X})^2}{K - 1}}{\dfrac{\sum_{k}\sum_{i}(X_{ki} - \bar{X}_k)^2}{n - K}}$  (11.28)

where $N_k$ is the number of observations in the k-th cluster, $\bar{X}_k$ is the mean of variable X in that same k-th cluster, $\bar{X}$ is the overall mean of variable X, and $X_{ki}$ is the value that variable X takes on for a certain observation i in the k-th cluster. In addition, K represents the number of clusters to be compared, and n, the sample size.

By using the F statistic, researchers will be able to identify the variables whose means most differ between the groups, that is, those that most contribute to the formation of at least one of the K clusters (highest F statistic), as well as those that do not contribute to the formation of the suggested number of clusters, at a certain significance level.
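A minimal Python sketch of Expression (11.28) is shown below; the function anova_f and the variable names are illustrative, and the worked call uses the grades in physics together with the cluster solution that will be obtained in the practical example of the next section.

```python
# Minimal sketch of Expression (11.28): one-way ANOVA F statistic of a variable,
# computed from its values and the cluster label of each observation.
import numpy as np

def anova_f(values, labels):
    values, labels = np.asarray(values, dtype=float), np.asarray(labels)
    grand_mean = values.mean()
    clusters = np.unique(labels)
    K, n = len(clusters), len(values)
    between = sum(
        values[labels == k].size * (values[labels == k].mean() - grand_mean) ** 2
        for k in clusters
    ) / (K - 1)
    within = sum(
        ((values[labels == k] - values[labels == k].mean()) ** 2).sum()
        for k in clusters
    ) / (n - K)
    return between / within

# Grades in physics and the cluster of each student (solution of Section 11.2.2.2.2):
physics = [2.7, 8.0, 1.0, 1.0, 2.0]        # Gabriela, Luiz Felipe, Patricia, Ovidio, Leonor
cluster = [3, 1, 2, 3, 3]
print(round(anova_f(physics, cluster), 3))  # -> 22.337
```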

In the following section, we will discuss a practical example that will be solved algebraically, and from which the concepts of the k-means procedure may be established.

11.2.2.2.2 A Practical Example of a Cluster Analysis With the Nonhierarchical K-Means Agglomeration Schedule

To solve the nonhierarchical k-means agglomeration schedule algebraically, let’s use the data from our own example, which can be found in Table 11.12 and are shown in Table 11.16.

Table 11.16 Example: Grades in Math, Physics, and Chemistry on the College Entrance Exams

Student (Observation)   Grade in Mathematics (X1i)   Grade in Physics (X2i)   Grade in Chemistry (X3i)
Gabriela                3.7                          2.7                      9.1
Luiz Felipe             7.8                          8.0                      1.5
Patricia                8.9                          1.0                      2.7
Ovidio                  7.0                          1.0                      9.0
Leonor                  3.4                          2.0                      5.0

Software packages such as SPSS use the Euclidian distance as the default dissimilarity measure, which is why we will develop the algebraic procedures based on this measure. This criterion will also allow the results obtained to be compared to those found when elaborating the hierarchical agglomeration schedules in Section 11.2.2.1.2, since, in those situations, the Euclidian distance was also used. Likewise, it will not be necessary to standardize the variables through Z-scores, since all of them are in the same unit of measure (grades from 0 to 10). Otherwise, it would be crucial for the researcher to standardize the variables before elaborating the k-means procedure.
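For completeness, the short Python sketch below (illustrative only) shows the Z-score standardization that would be required if the variables were not measured in the same unit.

```python
# Minimal sketch: Z-score standardization of the variables (only needed when the
# variables are measured in different units, which is not the case in this example).
import numpy as np

X = np.array([[3.7, 2.7, 9.1],
              [7.8, 8.0, 1.5],
              [8.9, 1.0, 2.7],
              [7.0, 1.0, 9.0],
              [3.4, 2.0, 5.0]])

# After standardization, each variable has mean 0 and (sample) standard deviation 1.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
print(Z.round(3))
```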

Using the logical sequence presented in Section 11.2.2.2.1, we will develop the k-means procedure with K = 3 clusters. This number of clusters may have come from a decision made by the researcher and based on a certain preliminary criterion, or it was chosen based on the outputs of the hierarchical agglomeration schedules. In our case, the decision was made based on the comparison of the dendrograms that had already been constructed, and by the similarity of the outputs obtained by the single- and average-linkage methods.

Thus, we need to arbitrarily allocate the observations to three clusters, so that the respective centroids can be calculated. Therefore, we can establish that observations Gabriela and Luiz Felipe form the first cluster, Patricia and Ovidio, the second, and Leonor, the third. Table 11.17 shows the arbitrary formation of these preliminary clusters, as well as the calculation of the respective centroid coordinates, which makes the initial step of the k-means procedure algorithm possible.

Table 11.17 Arbitrary Allocation of the Observations in K = 3 Clusters and Calculation of the Centroid Coordinates—Initial Step of the K-Means Procedure

                                        Centroid Coordinates
Cluster                   Grade in Mathematics     Grade in Physics        Grade in Chemistry
Gabriela, Luiz Felipe     (3.7 + 7.8)/2 = 5.75     (2.7 + 8.0)/2 = 5.35    (9.1 + 1.5)/2 = 5.30
Patricia, Ovidio          (8.9 + 7.0)/2 = 7.95     (1.0 + 1.0)/2 = 1.00    (2.7 + 9.0)/2 = 5.85
Leonor                    3.40                     2.00                    5.00

Based on these coordinates, we constructed the chart seen in Fig. 11.21, which shows the arbitrary allocation of each observation to its cluster and the respective centroids.

Fig. 11.21 Arbitrary allocation of the observations in K = 3 clusters and respective centroids—Initial step of the K-means procedure.

Based on the second step of the logical sequence presented in Section 11.2.2.2.1, we must choose a certain observation and calculate the distance between it and all the cluster centroids, assuming that it is or it is not reallocated to each cluster. Selecting the first observation (Gabriela), for example, we can calculate the distances between it and the centroids of the clusters that have already been formed (Gabriela-Luiz Felipe, Patricia-Ovidio, and Leonor) and, after that, assume that it leaves its cluster (Gabriela-Luiz Felipe), and is inserted into one of the other two clusters, forming the cluster Gabriela-Patricia-Ovidio or Gabriela-Leonor. Thus, from Expressions (11.26) and (11.27), we must recalculate the new centroid coordinates, simulating that, in fact, the reallocation of Gabriela to one of the two clusters takes place, as shown in Table 11.18.

Table 11.18 Simulating the Reallocation of Gabriela and Calculating the New Centroid Coordinates

                                                             Centroid Coordinates
Cluster                      Simulation            Grade in Mathematics               Grade in Physics                   Grade in Chemistry
Luiz Felipe                  Excluding Gabriela    (2(5.75) - 3.70)/(2 - 1) = 7.80    (2(5.35) - 2.70)/(2 - 1) = 8.00    (2(5.30) - 9.10)/(2 - 1) = 1.50
Gabriela, Patricia, Ovidio   Including Gabriela    (2(7.95) + 3.70)/(2 + 1) = 6.53    (2(1.00) + 2.70)/(2 + 1) = 1.57    (2(5.85) + 9.10)/(2 + 1) = 6.93
Gabriela, Leonor             Including Gabriela    (1(3.40) + 3.70)/(1 + 1) = 3.55    (1(2.00) + 2.70)/(1 + 1) = 2.35    (1(5.00) + 9.10)/(1 + 1) = 7.05

Obs.: Note that the values calculated for the Luiz Felipe centroid coordinates are exactly the same as this observation’s original coordinates, as shown in Table 11.16.

Thus, from Tables 11.16, 11.17, and 11.18, we can calculate the following Euclidian distances:

  •  Assumption that Gabriela is not reallocated:

$d_{\text{Gabriela (Gabriela-Luiz Felipe)}} = \sqrt{(3.70 - 5.75)^2 + (2.70 - 5.35)^2 + (9.10 - 5.30)^2} = 5.066$

$d_{\text{Gabriela (Patricia-Ovidio)}} = \sqrt{(3.70 - 7.95)^2 + (2.70 - 1.00)^2 + (9.10 - 5.85)^2} = 5.614$

$d_{\text{Gabriela Leonor}} = \sqrt{(3.70 - 3.40)^2 + (2.70 - 2.00)^2 + (9.10 - 5.00)^2} = 4.170$

  •  Assumption that Gabriela is reallocated:

$d_{\text{Gabriela Luiz Felipe}} = \sqrt{(3.70 - 7.80)^2 + (2.70 - 8.00)^2 + (9.10 - 1.50)^2} = 10.132$

$d_{\text{Gabriela (Gabriela-Patricia-Ovidio)}} = \sqrt{(3.70 - 6.53)^2 + (2.70 - 1.57)^2 + (9.10 - 6.93)^2} = 3.743$

$d_{\text{Gabriela (Gabriela-Leonor)}} = \sqrt{(3.70 - 3.55)^2 + (2.70 - 2.35)^2 + (9.10 - 7.05)^2} = 2.085$
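The short Python sketch below (illustrative, with hypothetical variable names) recomputes the six Euclidian distances used in this first reallocation step, taking the centroid coordinates from Tables 11.17 and 11.18.

```python
# Minimal sketch: Euclidian distances between Gabriela and each candidate centroid,
# with and without her reallocation (values from Tables 11.17 and 11.18).
import numpy as np

gabriela = np.array([3.7, 2.7, 9.1])

centroids = {
    "Gabriela-Luiz Felipe (stays)":            [5.75, 5.35, 5.30],
    "Patricia-Ovidio (stays)":                 [7.95, 1.00, 5.85],
    "Leonor (stays)":                          [3.40, 2.00, 5.00],
    "Luiz Felipe (if she moves)":              [7.80, 8.00, 1.50],
    "Gabriela-Patricia-Ovidio (if she moves)": [6.53, 1.57, 6.93],
    "Gabriela-Leonor (if she moves)":          [3.55, 2.35, 7.05],
}

for name, c in centroids.items():
    print(f"{name:42s} {np.linalg.norm(gabriela - np.array(c)):.3f}")
# The smallest distance (2.085) is to the Gabriela-Leonor centroid, so Gabriela
# is reallocated to the cluster initially formed only by Leonor.
```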

Since Gabriela is the closest to the Gabriela-Leonor centroid (the shortest Euclidian distance), we must reallocate this observation to the cluster initially formed only by Leonor. So, the cluster in which observation Gabriela was at first (Gabriela-Luiz Felipe) has just lost it, and now Luiz Felipe has become an individual cluster. Therefore, the centroids of the cluster that receives it and the one that loses it must be recalculated. Table 11.19 shows the creation of the new clusters and the calculation of the respective centroid coordinates too.

Table 11.19 New Centroids With the Reallocation of Gabriela

                                   Centroid Coordinates
Cluster             Grade in Mathematics     Grade in Physics        Grade in Chemistry
Luiz Felipe         7.80                     8.00                    1.50
Patricia, Ovidio    7.95                     1.00                    5.85
Gabriela, Leonor    (3.7 + 3.4)/2 = 3.55     (2.7 + 2.0)/2 = 2.35    (9.1 + 5.0)/2 = 7.05

Based on these new coordinates, we can construct the chart shown in Fig. 11.22.

Fig. 11.22 New clusters and respective centroids—Reallocation of Gabriela.

Once again, let’s repeat the previous step. At this moment, since observation Luiz Felipe is isolated, let’s simulate the reallocation of the third observation (Patricia). We must calculate the distances between it and the centroids of the clusters that have already been formed (Luiz Felipe, Patricia-Ovidio, and Gabriela-Leonor) and, afterwards, assume that it leaves its cluster (Patricia-Ovidio) and is inserted into one of the other two clusters, forming the cluster Luiz Felipe-Patricia or Gabriela-Patricia-Leonor. Also based on Expressions (11.26) and (11.27), we must recalculate the new centroid coordinates, simulating that, in fact, the reallocation of Patricia to one of these two clusters happens, as shown in Table 11.20.

Table 11.20 Simulation of Patricia's Reallocation—Next Step of the K-Means Procedure Algorithm

                                                             Centroid Coordinates
Cluster                      Simulation            Grade in Mathematics               Grade in Physics                   Grade in Chemistry
Luiz Felipe, Patricia        Including Patricia    (1(7.80) + 8.90)/(1 + 1) = 8.35    (1(8.00) + 1.00)/(1 + 1) = 4.50    (1(1.50) + 2.70)/(1 + 1) = 2.10
Ovidio                       Excluding Patricia    (2(7.95) - 8.90)/(2 - 1) = 7.00    (2(1.00) - 1.00)/(2 - 1) = 1.00    (2(5.85) - 2.70)/(2 - 1) = 9.00
Gabriela, Patricia, Leonor   Including Patricia    (2(3.55) + 8.90)/(2 + 1) = 5.33    (2(2.35) + 1.00)/(2 + 1) = 1.90    (2(7.05) + 2.70)/(2 + 1) = 5.60

Obs.: Note that the values calculated for the Ovidio centroid coordinates are exactly the same as this observation’s original coordinates, as shown in Table 11.16.

Similar to what was carried out when simulating Gabriela’s reallocation, based on Tables 11.16, 11.19, and 11.20, let’s calculate the Euclidian distances between Patricia and each one of the centroids:

  •  Assumption that Patricia is not reallocated:

$d_{\text{Patricia Luiz Felipe}} = \sqrt{(8.90 - 7.80)^2 + (1.00 - 8.00)^2 + (2.70 - 1.50)^2} = 7.187$

$d_{\text{Patricia (Patricia-Ovidio)}} = \sqrt{(8.90 - 7.95)^2 + (1.00 - 1.00)^2 + (2.70 - 5.85)^2} = 3.290$

$d_{\text{Patricia (Gabriela-Leonor)}} = \sqrt{(8.90 - 3.55)^2 + (1.00 - 2.35)^2 + (2.70 - 7.05)^2} = 7.026$

  •  Assumption that Patricia is reallocated:

$d_{\text{Patricia (Luiz Felipe-Patricia)}} = \sqrt{(8.90 - 8.35)^2 + (1.00 - 4.50)^2 + (2.70 - 2.10)^2} = 3.593$

$d_{\text{Patricia Ovidio}} = \sqrt{(8.90 - 7.00)^2 + (1.00 - 1.00)^2 + (2.70 - 9.00)^2} = 6.580$

$d_{\text{Patricia (Gabriela-Patricia-Leonor)}} = \sqrt{(8.90 - 5.33)^2 + (1.00 - 1.90)^2 + (2.70 - 5.60)^2} = 4.684$

Bearing in mind that the Euclidian distance between Patricia and the centroid of its own cluster (Patricia-Ovidio) is the shortest, there is no reason to reallocate it to another cluster and, at this moment, we maintain the solution presented in Table 11.19 and in Fig. 11.22.

Next, we will develop the same procedure, however, simulating the reallocation of the fourth observation (Ovidio). Analogously, we must, therefore, calculate the distances between this observation and the centroids of the clusters that have already been formed (Luiz Felipe, Patricia-Ovidio, and Gabriela-Leonor) and, after that, assume that it leaves its cluster (Patricia-Ovidio) and is inserted into one of the other two clusters, forming the cluster Luiz Felipe-Ovidio or Gabriela-Ovidio-Leonor. Once again by using Expressions (11.26) and (11.27), we can recalculate the new centroid coordinates, simulating that, in fact, the reallocation of Ovidio to one of these two clusters takes place, as shown in Table 11.21.

Table 11.21 Simulating Ovidio's Reallocation—New Step of the K-Means Procedure Algorithm

                                                           Centroid Coordinates
Cluster                     Simulation          Grade in Mathematics               Grade in Physics                   Grade in Chemistry
Luiz Felipe, Ovidio         Including Ovidio    (1(7.80) + 7.00)/(1 + 1) = 7.40    (1(8.00) + 1.00)/(1 + 1) = 4.50    (1(1.50) + 9.00)/(1 + 1) = 5.25
Patricia                    Excluding Ovidio    (2(7.95) - 7.00)/(2 - 1) = 8.90    (2(1.00) - 1.00)/(2 - 1) = 1.00    (2(5.85) - 9.00)/(2 - 1) = 2.70
Gabriela, Ovidio, Leonor    Including Ovidio    (2(3.55) + 7.00)/(2 + 1) = 4.70    (2(2.35) + 1.00)/(2 + 1) = 1.90    (2(7.05) + 9.00)/(2 + 1) = 7.70

Obs.: Note that the values calculated for the Patricia centroid coordinates are exactly the same as this observation’s original coordinates, as shown in Table 11.16.

Next, we can see the calculations of the Euclidian distances between Ovidio and each one of the centroids, defined from Tables 11.16, 11.19, and 11.21:

  •  Assumption that Ovidio is not reallocated:

$d_{\text{Ovidio Luiz Felipe}} = \sqrt{(7.00 - 7.80)^2 + (1.00 - 8.00)^2 + (9.00 - 1.50)^2} = 10.290$

$d_{\text{Ovidio (Patricia-Ovidio)}} = \sqrt{(7.00 - 7.95)^2 + (1.00 - 1.00)^2 + (9.00 - 5.85)^2} = 3.290$

$d_{\text{Ovidio (Gabriela-Leonor)}} = \sqrt{(7.00 - 3.55)^2 + (1.00 - 2.35)^2 + (9.00 - 7.05)^2} = 4.187$

  •  Assumption that Ovidio is reallocated:

$d_{\text{Ovidio (Luiz Felipe-Ovidio)}} = \sqrt{(7.00 - 7.40)^2 + (1.00 - 4.50)^2 + (9.00 - 5.25)^2} = 5.145$

$d_{\text{Ovidio Patricia}} = \sqrt{(7.00 - 8.90)^2 + (1.00 - 1.00)^2 + (9.00 - 2.70)^2} = 6.580$

$d_{\text{Ovidio (Gabriela-Ovidio-Leonor)}} = \sqrt{(7.00 - 4.70)^2 + (1.00 - 1.90)^2 + (9.00 - 7.70)^2} = 2.791$

In this case, since observation Ovidio is the closest to the centroid of Gabriela-Ovidio-Leonor (the shortest Euclidian distance), we must reallocate this observation to the cluster formed originally by Gabriela and Leonor. Therefore, observation Patricia becomes an individual cluster. Table 11.22 shows the centroid coordinates of clusters Luiz Felipe, Patricia, and Gabriela-Ovidio-Leonor.

Table 11.22 New Centroids With Ovidio's Reallocation

                                     Centroid Coordinates
Cluster                     Grade in Mathematics   Grade in Physics   Grade in Chemistry
Luiz Felipe                 7.80                   8.00               1.50
Patricia                    8.90                   1.00               2.70
Gabriela, Ovidio, Leonor    4.70                   1.90               7.70

We will not carry out the procedure proposed for the fifth observation (Leonor), since it had already fused with observation Gabriela in the first step of the algorithm. We can consider that the k-means procedure is concluded, since it is no longer possible to reallocate any observation due to closer proximity to another cluster’s centroid. Fig. 11.23 shows the allocation of each observation to its cluster and their respective centroids. Note that the solution achieved is equal to the one reached through the single- (Fig. 11.15) and average-linkage methods, when we elaborated the hierarchical agglomeration schedules.

Fig. 11.23 Solution of the K-means procedure.

As we have already discussed, the matrix with the distances between the observations does not need to be computed at each step of the k-means procedure algorithm, differently from the hierarchical agglomeration schedules, which reduces the requirements in terms of computational capabilities and allows nonhierarchical agglomeration schedules to be applied to datasets significantly larger than the ones traditionally studied through hierarchical schedules.

Table 11.23 shows the Euclidian distances between each observation of the original dataset and the centroids of each one of the clusters formed.

Table 11.23 Euclidian Distances Between Observations and Cluster Centroids

                                      Cluster
Student (Observation)   Luiz Felipe   Patricia   Gabriela-Ovidio-Leonor
Gabriela                10.132        8.420      1.897
Luiz Felipe             0.000         7.187      9.234
Patricia                7.187         0.000      6.592
Ovidio                  10.290        6.580      2.791
Leonor                  8.223         6.045      2.998

We would like to emphasize that this algorithm can be elaborated with another preliminary allocation of the observations to the clusters besides the one chosen in this example. Reapplying the k-means procedure with several arbitrary choices, given K clusters, allows the researcher to assess how stable the clustering procedure is, and to underpin the allocation of the observations to the groups in a consistent way.
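As a hedged illustration of this stability check, the sketch below assumes scikit-learn is available; note that sklearn's KMeans uses Lloyd's batch algorithm rather than the one-observation-at-a-time reallocation described in Section 11.2.2.2.1, but, for this small and well-separated dataset, it should reach the same partition.

```python
# Minimal sketch (assumes scikit-learn is installed): k-means with K = 3 under
# several random starts, as a stability check of the clustering solution.
import numpy as np
from sklearn.cluster import KMeans

students = ["Gabriela", "Luiz Felipe", "Patricia", "Ovidio", "Leonor"]
X = np.array([[3.7, 2.7, 9.1],
              [7.8, 8.0, 1.5],
              [8.9, 1.0, 2.7],
              [7.0, 1.0, 9.0],
              [3.4, 2.0, 5.0]])

km = KMeans(n_clusters=3, n_init=20, random_state=0).fit(X)
for student, label in zip(students, km.labels_):
    print(student, "-> cluster", label)
# The grouping {Gabriela, Ovidio, Leonor}, {Luiz Felipe}, and {Patricia} should be
# recovered, matching Fig. 11.23; the cluster numbering itself is arbitrary and may
# differ from run to run.
```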

After concluding this procedure, it is essential to check, through the F-test of one-way ANOVA, if the values of each one of the three variables considered in the analysis are statistically different between the three clusters. To make the calculation of the F statistics that correspond to this test easier, we constructed Tables 11.24, 11.25, and 11.26, which show the means per cluster and the general mean of the variables mathematics, physics, and chemistry, respectively.

Table 11.24 Means per Cluster and General Mean of the Variable mathematics

Cluster 1                Cluster 2             Cluster 3
X_Luiz Felipe = 7.80     X_Patricia = 8.90     X_Gabriela = 3.70
                                               X_Ovidio = 7.00
                                               X_Leonor = 3.40
X̄1 = 7.80                X̄2 = 8.90             X̄3 = 4.70
General mean: X̄ = 6.16

Table 11.25 Means per Cluster and General Mean of the Variable physics

Cluster 1                Cluster 2             Cluster 3
X_Luiz Felipe = 8.00     X_Patricia = 1.00     X_Gabriela = 2.70
                                               X_Ovidio = 1.00
                                               X_Leonor = 2.00
X̄1 = 8.00                X̄2 = 1.00             X̄3 = 1.90
General mean: X̄ = 2.94

Table 11.26 Means per Cluster and General Mean of the Variable chemistry

Cluster 1                Cluster 2             Cluster 3
X_Luiz Felipe = 1.50     X_Patricia = 2.70     X_Gabriela = 9.10
                                               X_Ovidio = 9.00
                                               X_Leonor = 5.00
X̄1 = 1.50                X̄2 = 2.70             X̄3 = 7.70
General mean: X̄ = 5.46

So, based on the values presented in these tables and by using Expression (11.28), we are able to calculate the variation between the groups and within them for each one of the variables, as well as the respective F statistics. Tables 11.27, 11.28, and 11.29 show these calculations.

Table 11.27 Variation and F Statistic for the Variable mathematics

Variability between the groups: [(7.80 - 6.16)² + (8.90 - 6.16)² + 3(4.70 - 6.16)²] / (3 - 1) = 8.296
Variability within the groups:  [(3.70 - 4.70)² + (7.00 - 4.70)² + (3.40 - 4.70)²] / (5 - 3) = 3.990
F = 8.296 / 3.990 = 2.079

Note: The calculation of the variability within the groups only took cluster 3 into consideration, since the others show variability equal to 0, because they are formed by a single observation.

Table 11.28 Variation and F Statistic for the Variable physics

Variability between the groups: [(8.00 - 2.94)² + (1.00 - 2.94)² + 3(1.90 - 2.94)²] / (3 - 1) = 16.306
Variability within the groups:  [(2.70 - 1.90)² + (1.00 - 1.90)² + (2.00 - 1.90)²] / (5 - 3) = 0.730
F = 16.306 / 0.730 = 22.337

Note: The same as the previous table.

Table 11.29 Variation and F Statistic for the Variable chemistry

Variability between the groups: [(1.50 - 5.46)² + (2.70 - 5.46)² + 3(7.70 - 5.46)²] / (3 - 1) = 19.176
Variability within the groups:  [(9.10 - 7.70)² + (9.00 - 7.70)² + (5.00 - 7.70)²] / (5 - 3) = 5.470
F = 19.176 / 5.470 = 3.506

Note: The same as Table 11.27.

Now, let’s analyze the rejection or not of the null hypothesis of the F-tests for each one of the variables. Since there are two degrees of freedom for the variability between the groups (K – 1 = 2) and two degrees of freedom for the variability within the groups (n – K = 2), by using Table A in the Appendix, we have Fc = 19.00 (critical F at a significance level of 0.05). Therefore, only for the variable physics can we reject the null hypothesis that all the groups formed have the same mean, since F calculated Fcal = 22.337 > Fc = F2,2,5% = 19.00, So, for this variable, there is at least one group that has a mean that is statistically different from the others. For the variables mathematics and chemistry, however, we cannot reject the test’s null hypothesis at a significance level of 0.05.

Software such as SPSS and Stata do not offer the Fc for the defined degrees of freedom and a certain significance level. However, they offer the Fcal significance level for these degrees of freedom. Thus, instead of analyzing if Fcal > Fc, we must verify if the Fcal significance level is less than 0.05 (5%). Therefore:

If Sig. F (or Prob. F) < 0.05, there is at least one difference between the groups for the variable under analysis.

The Fcal significance level can be obtained in Excel by using the command Formulas → Insert Function → FDIST, which will open a dialog box like the one shown in Fig. 11.24.

Fig. 11.24 Obtaining the F significance level (command Insert Function).

As we can see in this figure, sig. F for the variable physics is less than 0.05 (sig. F = 0.043), that is, there is at least one difference between the groups for this variable at a significance level of 0.05. An inquisitive researcher will be able to carry out the same procedure for the variables mathematics and chemistry. In short, Table 11.30 presents the results of the one-way ANOVA, with the variation of each variable, the F statistics, and the respective significance levels.
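The same significance levels can be reproduced outside Excel; the Python sketch below assumes SciPy is available and uses the right tail of the F distribution with 2 and 2 degrees of freedom.

```python
# Minimal sketch (assumes scipy is installed): significance level (sig. F) of each
# calculated F statistic, i.e., the right-tail probability P(F > Fcal).
from scipy.stats import f

dof_between, dof_within = 2, 2   # K - 1 and n - K degrees of freedom

for variable, f_cal in [("mathematics", 2.079), ("physics", 22.337), ("chemistry", 3.506)]:
    sig = f.sf(f_cal, dof_between, dof_within)
    print(f"{variable:12s} F = {f_cal:7.3f}  sig. F = {sig:.3f}")
# Only physics shows sig. F < 0.05 (0.043), in line with Table 11.30.
```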

Table 11.30 One-Way Analysis of Variance (ANOVA)

Variable      Variability Between the Groups   Variability Within the Groups   F        Sig. F
mathematics   8.296                            3.990                           2.079    0.325
physics       16.306                           0.730                           22.337   0.043
chemistry     19.176                           5.470                           3.506    0.222

The one-way ANOVA table also allows the researcher to identify the variables that most contribute to the formation of at least one of the clusters (those with the greatest F statistic values), because their mean in at least one of the groups is statistically different from the others. It is important to mention that F statistic values are very sensitive to the sample size and, in this case, the variables mathematics and chemistry ended up not having statistically different means among the three groups mainly because the sample is small (only five observations).

We would like to emphasize that this one-way ANOVA can also be carried out right after the application of a certain hierarchical agglomeration schedule, since it only depends on the classification of the observations into groups. The researcher must be careful about only one thing when comparing the results obtained by a hierarchical schedule to the ones obtained by a nonhierarchical schedule: to use the same distance measure in both situations. If different distance measures are used, different allocations of the observations to the same number of clusters may occur and, therefore, different values of the F statistics may be calculated in the two situations.

In general, in case there are one or more variables that do not contribute to the formation of the suggested number of clusters, we recommend that the procedure be reapplied without it (or them). In these situations, the number of clusters may change and, if the researcher feels the need to underpin the initial input regarding the number of K clusters, he may even use a hierarchical agglomeration schedule without those variables before reapplying the k-means procedure, which will make the analysis cyclical.

Moreover, the existence of outliers may generate considerably disperse clusters, and treating the dataset in order to identify extremely discrepant observations becomes an advisable procedure, before elaborating nonhierarchical agglomeration schedules. In the Appendix of this chapter, an important procedure in Stata for detecting multivariate outliers will be presented.
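As a purely illustrative sketch (and not the Stata procedure presented in the Appendix), the Python code below flags multivariate outliers with squared Mahalanobis distances compared against a chi-squared cutoff; with only five observations, as in our example, such a screen is of limited value and serves only to show the mechanics.

```python
# Minimal, illustrative sketch of a multivariate outlier screen based on squared
# Mahalanobis distances (not the procedure discussed in the Appendix).
import numpy as np
from scipy.stats import chi2

X = np.array([[3.7, 2.7, 9.1],
              [7.8, 8.0, 1.5],
              [8.9, 1.0, 2.7],
              [7.0, 1.0, 9.0],
              [3.4, 2.0, 5.0]])

center = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
diff = X - center
d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)  # squared Mahalanobis distances

# Under approximate multivariate normality, d2 follows a chi-squared distribution
# with p = 3 degrees of freedom, so unusually large values can be flagged.
cutoff = chi2.ppf(0.975, df=X.shape[1])
print(np.round(d2, 3), "cutoff:", round(cutoff, 3))
```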

As with hierarchical agglomeration schedules, the nonhierarchical k-means schedule cannot be used as an isolated technique to make a conclusive decision about the clustering of observations. The data behavior, sample size, and criteria adopted by the researcher may be extremely sensitive to the allocation of observations and the formation of clusters. The combination of the outputs found with the ones coming from other techniques can more powerfully underpin the choices made by the researcher, and provide higher transparency in the decision-making process.

At the end of the cluster analysis, since the clusters formed can be represented in the dataset by a new qualitative variable with terms connected to each observation (cluster 1, cluster 2, ..., cluster K), other exploratory multivariate techniques can be elaborated from it, as, for example, a correspondence analysis, so that, depending on the researcher’s objectives, we can study a possible association between the clusters and the categories of other qualitative variables.

This new qualitative variable, which represents the allocation of each observation, may also be used as an explanatory variable of a certain phenomenon in confirmatory multivariate models as, for example, multiple regression models, as long as it is transformed into dummy variables that represent the categories (clusters) of this new variable generated in the cluster analysis, as we will study in Chapter 13. On the other hand, such a procedure only makes sense when we intend to propose a diagnostic regarding the behavior of the dependent variable, without aiming at having forecasts. Since a new observation does not have its place in a certain cluster, obtaining its allocation is only possible when we include such observation into a new cluster analysis, in order to obtain a new qualitative variable and, consequently, new dummies.
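As a brief illustration of this transformation (assuming pandas is available, with hypothetical column names), the sketch below creates the dummy variables from the cluster membership obtained in our example.

```python
# Minimal sketch (assumes pandas is installed): turning the cluster membership
# variable into dummy variables for later use in a regression model.
import pandas as pd

df = pd.DataFrame({
    "student": ["Gabriela", "Luiz Felipe", "Patricia", "Ovidio", "Leonor"],
    "cluster": ["cluster 3", "cluster 1", "cluster 2", "cluster 3", "cluster 3"],
})

# One dummy per cluster, dropping the first category to serve as the reference.
dummies = pd.get_dummies(df["cluster"], prefix="c", drop_first=True)
print(pd.concat([df, dummies], axis=1))
```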

In addition, this new qualitative variable can also be considered dependent on a multinomial logistic regression model, allowing the researcher to evaluate the probabilities each observation has to belong to each one of the clusters formed, due to the behavior of other explanatory variables not initially considered in the cluster analysis. We would also like to highlight that this procedure depends on the research objectives and construct established, and has a diagnostic nature as regards the behavior of the variables in the sample for the existing observations, without a predictive purpose.

Finally, if the clusters formed present substantiality in relation to the number of observations allocated, by using other variables, we may even apply specific confirmatory techniques for each cluster identified, so that, possibly, better adjusted models can be generated.

Next, the same dataset will be used to run cluster analyses in SPSS and Stata. In Section 11.3, we will discuss the procedures for elaborating the techniques studied in SPSS and their results too. In Section 11.4, we will study the commands to perform the procedures in Stata, with the respective outputs.
