2
Quantification and Decision-making

Chapter 1 highlighted the links between quantification and HR decision-making: quantification is used not only to inform and illuminate decision-making ex ante, but also to justify, ex post, decisions already taken. This dyad of information/justification rests, however, on what Desrosières (2008a) calls the “myth of objectivity”, i.e. the idea that quantification is a neutral reflection of reality. This myth has its origins in the positivist stance of the natural sciences. Although it has been strongly challenged by many studies, both in epistemology and in sociology, the myth of objective quantification persists. It is all the more important in HR because this function needs arguments to support decisions that are often crucial for employees (section 2.1).

More recently, as has been seen, the emergence of Big Data and algorithms has brought a new dimension to HR, based on the notion of personalization, i.e. decision-making geared to the individual. This notion marks a shift both for quantification and statistical science, historically positioned as a science of large numbers remote from individuals, and for the HR function, historically positioned as a function managing collectives rather than individualities (section 2.2). Finally, the rise of predictive models is further reshaping the links between quantification and decision-making, focusing on decisions related to the future. Once again, this breaks with the historical positioning of statistics and of the HR function. It also invites a discussion of the notion of performativity and of the effect of quantification on individuals’ behavior, even beyond the cases in which it explicitly predicts such behavior (section 2.3).

2.1. In search of objectivity

Using quantification to inform but also to justify decision-making means attributing to it characteristics and qualities that are difficult to find elsewhere, which can be described as a “data imaginary” (Beer 2019). These qualities can in fact be summed up in one key notion, with many ramifications: objectivity (Bruno 2015; Beer 2019). The myth of objective quantification is not new, and is deeply rooted in the history of Western societies (Gould 1997; Crosby 2003; Supiot 2015). Yet much research has debunked this myth, for example by showing the extent to which quantification operations respond to socially or politically constructed choices, but also by highlighting the biases and perverse effects of quantification. It therefore seems legitimate to ask what explains the persistence of this myth, particularly within the HR function, or even in HR research, which often attributes this characteristic of objectivity to quantification. In fact, the HR function must regularly make decisions that can be crucial for individuals: recruitment, promotion, salary increases, etc. Having an apparently solid argument to justify these decisions probably makes them more acceptable and less open to challenge by the social body.

2.1.1. The myth of objective quantification

The myth of objective quantification dates back to the Middle Ages in Western civilization (Crosby 2003). To be able to claim to be objective, the quantification assumed by this myth has several characteristics: neutrality, precision, timeliness, coherence and transparency.

2.1.1.1. The origins of the myth

Crosby (2003) studied the origins of the supremacy of quantification in Western society. The use of quantification goes back to ancient times: many philosophers of Greek antiquity were interested in these questions, and Plato considered commensuration operations essential to making better decisions and to making human values less vulnerable to passions and fate (Espeland and Stevens 1998). It was not until the 13th Century, however, that quantification took on the importance it has today. Crosby even dates this turning point precisely, placing it between 1275 and 1325. During these 50 years, several major developments based on quantification occurred: the mechanical clock, geographical maps, double-entry accounting. This rise in quantification, made possible in part by scientific advances during the same period, extended to the arts, with perspective painting and measured music (Box 2.1).

Looking at more recent periods, Supiot (2015) identifies the major functions that have gradually been conferred on quantification in Western society, particularly in the fields of law and government: accountability, administration, judgment and legislation. Thus, accounting serves the objective of accountability and has several characteristics. First of all, it must give a true and fair view of reality. It must then carry probative value (the numbers entered in the accounts becoming evidence). Finally, it uses money as a measurement standard, making objects or things of very different orders (a material object, a service rendered, etc.) commensurable, i.e. comparable. The administration of an entity (State, organization, community, etc.) requires knowledge of its resources. Here, quantification is valuable in defining, cataloguing and measuring these resources. It even makes it possible to go further by identifying regularities, or recurring trends, which are just as valuable to an administrator. Judgment, or decision-making in situations of uncertainty, can also be based on quantification, and more specifically on probabilistic calculation: being able to measure the probability of an event occurring reduces uncertainty.

Probabilistic calculation has also proved valuable in the production of some legislation. As early as the 18th Century in France, philosophers debated heatedly whether preventive inoculation against smallpox should be made mandatory. Inoculation made it possible to reduce the disease overall, but could be fatal for some individuals. It was on the basis of a probabilistic calculation proposed by Bernoulli (the probability of dying from inoculation versus the general gain in life expectancy for the whole population) that philosophers such as Voltaire and d'Alembert took opposing positions. Subsequently, the field of public health saw similar debates reappear, particularly in the 19th Century, between doctors favoring a method of standardizing care based on medical statistics, and doctors favoring a more individual approach, giving great importance to the exchange with the patient.
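To make the logic of this controversy concrete, the short Python sketch below restates it as an expected-value comparison. All figures are purely hypothetical illustrations, not Bernoulli's actual estimates: the point is simply to show how a collective gain in life expectancy can coexist with an immediate individual risk.

# Illustrative only: hypothetical figures, not Bernoulli's actual estimates.
p_death_inoculation = 0.005        # assumed risk of dying from the inoculation itself
p_death_smallpox = 0.10            # assumed lifetime risk of dying of smallpox without inoculation
years_lost_per_smallpox_death = 30     # assumed average years of life lost per smallpox death
years_lost_per_inoculation_death = 40  # assumed (inoculation deaths strike earlier in life)

# Expected years of life lost per person under each policy
expected_loss_without = p_death_smallpox * years_lost_per_smallpox_death
expected_loss_with = p_death_inoculation * years_lost_per_inoculation_death

print(f"Expected loss without inoculation: {expected_loss_without:.2f} years per person")
print(f"Expected loss with inoculation:    {expected_loss_with:.2f} years per person")
print(f"Average gain from inoculation:     {expected_loss_without - expected_loss_with:.2f} years per person")
# The population-level expectation favors inoculation, yet each individual still
# faces a small but immediate probability of death: the core of the dispute.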

Gardey (2008) looked at an even more recent period, studying the evolution and diffusion of calculating machines between 1800 and 1940. She shows that the degree of complexity of the calculations carried out by machines evolved from arithmetic calculation (dedicated to bookkeeping) to actuarial calculation (integrating probabilities, for example), but that the purposes of these machines remained relatively stable over time: to produce a reliable result quickly.

In these various examples, quantification is perceived both as a way of getting closer to reality and as a way of keeping it at a distance. Thus, double-entry bookkeeping and perspective painting make it possible to build a more accurate picture of reality on paper than older, less quantified methods. At the same time, the control of time or space allows human beings to no longer merge with reality, to depend less on it, and to bend it to social norms (equal hours, longitudes and latitudes, bookkeeping, judgment in situations of uncertainty, actuarial calculation). This is where the myth of “objective” quantification takes on its full importance.

Desrosières’ work is valuable in understanding this. He emphasizes that the “reality status” of statistical objects and measures is crucial (Desrosières 2008a). In other words, statistics are mobilized to the extent that a discourse and belief can be maintained that they represent a reflection of reality, or that they must be as close as possible to it. These two arguments are not interchangeable. Indeed, the notion of “reflection” seems to imply a difference in nature between reality and the statistics that measure it, while the notion of “rapprochement” seems to imply a similarity between reality and statistics. However, in both cases, the statistics are estimated and valued according to their relationship to reality. Desrosières then identifies three different discourses that value the links between statistics and reality, which he describes as “realistic” (Box 2.2).

This “realistic” stance therefore leads to an attempt to increase the reliability of statistics and measurements as much as possible: statistics must come as close as possible to the object they measure, which refers to the notion of objectivity. Porter then evokes a “language of objectivity” (Porter 1996, p. 229) mobilized to strengthen confidence in quantification, and based in part on the idea of a “mechanical objectivity” (Ibid. p. 213) of quantification.

2.1.1.2. What are the characteristics of “objective quantification”?

This “mechanical objectivity” presupposes several characteristics, which can be identified, for example, in the work of Porter or Desrosières.

Porter (1996) emphasizes the link between objectivity and neutrality or impersonality. Objectivity implies reducing as much as possible the intervention of the observer, the evaluator, ultimately of the human being. It is then defined as knowledge that depends little or not at all on the individual who produces it. Porter gives the example of mental tests and their evolution, which sought to limit the evaluator’s intervention as much as possible.

Desrosières (2008b) specifies the conventions and quality criteria that underlie quantification and, among other things, the myth of objective quantification. Thus, the precision criterion refers to the desire to reduce measurement and sampling errors and biases. The timeliness criterion requires that statistical information be correct at a given moment t, and that a change in reality result in the data being updated. The coherence criterion is a central issue in maintaining the myth of objective quantification, in a context where several sources can provide the same data: a (not so rare) situation where two institutions provide different measurements, of the unemployment rate for example, would result in a questioning of the quality of the measurement and therefore of the link between the measurement and reality. The myth of objective quantification is also based on a criterion of clarity and transparency (Espeland and Stevens 2008). Users of statistics must have access to the way in which they have been produced, their exact definition, any methodological choices made, etc.

More generally, the link between the concept of transparency and quantification has been the subject of extensive literature, both on the transparency of statistical tools and on the fact that quantification can be considered as providing transparency. Hansen and Flyverbom (2015), for example, study two quantification tools seen as providing transparency: rankings and algorithms. They show how rankings are mobilized by public policies as arguments of transparency (e.g. rankings relating to anti-corruption or press freedom). For their part, algorithms, although sometimes criticized for their lack of transparency, are in some cases perceived and presented as affording unprecedented access to reality, because of the mobilization of a greater quantity of data. Algorithms are therefore increasingly used by governments in the fight against fraud or crime. In any case, both rankings and algorithms are presented as conveying a perception of transparency. Hansen and Flyverbom, on the contrary, stress the fact that both tools create mediation between the measured object and the subject.

A last characteristic seems necessary to guarantee this objectivity: the ability to give an account, or accountability. This characteristic is often linked to the notion of transparency (Espeland and Stevens 2008). To give an account of reality is to make things visible, in other words, to introduce a form of transparency. This link can be developed further by calling upon Foucault’s work on transparency (Foucault 1975). Foucault uses the example of the panoptic prison system, where a guard can observe all prisoners through transparent walls. In this system, the transparency of the walls allows the guard to see all the actions of the prisoners and thus to give an account of them. As Espeland and Stevens (2008, p. 432) point out, “an ethics of quantification should recognize that we live at a time in which democracy, merit, participation, accountability and even ‘fairness’ are presumed to be best disclosed and adjudicated through numbers”.

The objectivity of quantification thus seems to constitute an essential argument for its mobilization in certain functions and discourses. This objectivity presupposes the respect of certain technical or methodological criteria, such as impersonality, precision, timeliness, coherence and transparency. In practice, this translates into a form of trust in figures, seen as providing the necessary objectivity, as long as they meet these criteria. However, at the same time, many studies question this ideal of objectivity, even when these criteria are respected.

2.1.2. Limited objectivity

The objectivity of quantification may be threatened on at least three levels. First of all, quantification implies “putting reality into statistics”, an investment of form (Thévenot 1989) that is always based on choices and conventions, which calls into question the idea of a totally neutral statistic. Second, quantification can give rise to many biases, as the example of the quantified evaluation of work or individuals illustrates. Finally, quantifying reality can also, in some cases, have an effect on it, which calls into question the discourse of metrological realism.

2.1.2.1. Quantification conventions and the statisticalization of reality

Quantifying reality implies carrying out operations to put the world and things “into statistics”. The sociology of quantification has focused in particular on these operations, which constitute “quantification conventions”, in the sense that they are socially constructed (Diaz-Bone 2016) and provide interpretative frameworks for the various actors (Diaz-Bone and Thévenot 2010). The particularity of quantification conventions lies in the fact that they are based, among other things, on scientific arguments and techniques, which reinforces the illusion of their objectivity (Salais 2016). Nevertheless, the sociology of quantification shows that these statisticalization operations, although necessary, may not be so “objective”, at least by the criteria mentioned above. Thus, they involve human beings, which calls into question the criterion of impersonality: Gould (1997) clearly shows the role of individual careers and prejudices in the development, choice, mobilization or interpretation of quantified measurements. These operations can also mask a certain imprecision, since quantification always amounts to a form of reduction of the real world (Gould 1997). Finally, they are not always very transparent. Indeed, these statisticalization operations are often taken for granted, and therefore treated as negligible, when it comes to using the data they produce. In other words, statisticians raise the question of the neutrality and rigor of the methods they use much more often than the question of the quality of the data they use. As a result, few users or recipients of the data question their initial quality, which calls into question the ideal of clarity and transparency.

Taking an interest in these statisticalization processes can be very enriching, by showing that they are not neutral, mechanical processes from which human biases are absent. Several examples illustrate these points, two in the health field and the others in the HR field.

Juven (2016) studies the implementation of activity-based pricing in hospitals. This pricing system requires the ability to calculate the costs of hospital stays and medical procedures. First, it was necessary to quantify the costs of each medical procedure. However, these costs may vary according to the type of patient concerned (e.g. operating on an appendicitis patient does not have the same consequences and therefore the same cost if it is a child, an adult, an elderly person or a person with another condition, etc.). This has led to the creation of supposedly homogeneous “patient groups”, for which the costs of operations will therefore be identical. Once these patient groups were created, the cost of each medical procedure for each patient group could be quantified. However, these two operations (creation of patient groups and costing of procedures) correspond to choices, partly managerial, but also political and social. For example, Juven gives the example of patient associations that have worked to ensure that the disease and the patients they represent are better taken into account (Box 2.3). Moreover, even with this degree of precision, many cases of individual patients are in fact difficult to categorize (due to the multiplicity of conditions and procedures performed, for example), which makes these choices partly dependent on hospital administrative staff.

Still in the health field, Hacking (2005) is interested in obesity. Obesity is measured on the basis of the body mass index, mobilized by Quetelet in the 19th Century to define a model of human growth, then by doctors in the 20th Century to define dangerous slimming thresholds. It was only at the end of the 20th Century that obesity was defined as a situation where this index exceeds 30, particularly because studies showed that life expectancy decreased beyond this stage. However, this definition of obesity was quickly challenged by the definition of overweight (index above 25). Today, many publications use the threshold of 25, not 30. This example and the variation of thresholds clearly illustrate the importance of quantification choices and conventions.

Similarly, in the HR field, the statisticalization of reality is based just as much on certain choices and conventions. The example of measuring employees’ commitment, mentioned previously, gives rise to a very wide variety of scales. This variety reflects the difficulty of agreeing on the definition of commitment. However, phenomena that seem easier to measure can lead to equally large variations. Thus, the measurement of absenteeism can give rise to a wide variety of indicators: total number of days of absence, number of absences, average number of days of absence per absence, number of days of absence counting only working days or all days, etc. In addition to this variety of indicators, there is also a variety of definitions of absence. Should all types of absence be included in “days of absence”? Or only absences due to illness? What about maternity or paternity leave? These choices are of course not insignificant: taking maternity leave into account, for example, will lead to a mechanical increase in the measured absenteeism among women. Whether imposed by public authorities or decided internally following negotiations with social partners, these statistical choices clearly illustrate the fact that the same phenomenon can in fact give rise to a wide variety of measures. This calls into question the idea of a quantification that reflects the real situation in an impersonal and neutral way.
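As a purely illustrative sketch, the following Python fragment computes several absenteeism indicators from the same hypothetical absence records; the record structure and the two conventions compared are assumptions made for the example, not standards.

# Illustrative sketch: the same absence records yield very different absenteeism
# figures depending on the conventions chosen. Records and conventions are
# hypothetical assumptions, not a standard.
from datetime import date

# Each record: (employee_id, start_date, calendar_days, working_days, reason)
absences = [
    ("E1", date(2023, 3, 6), 5, 5, "illness"),
    ("E2", date(2023, 5, 2), 2, 2, "illness"),
    ("E2", date(2023, 9, 4), 112, 80, "maternity"),
    ("E3", date(2023, 7, 10), 1, 1, "child_care"),
]

def indicators(records, included_reasons):
    kept = [r for r in records if r[4] in included_reasons]
    total_calendar = sum(r[2] for r in kept)
    total_working = sum(r[3] for r in kept)
    n_spells = len(kept)
    avg_length = total_calendar / n_spells if n_spells else 0.0
    return total_calendar, total_working, n_spells, avg_length

# Convention A: all absence types; Convention B: illness only
for label, reasons in [("all reasons", {"illness", "maternity", "child_care"}),
                       ("illness only", {"illness"})]:
    cal, work, n, avg = indicators(absences, reasons)
    print(f"{label:13s} -> calendar days: {cal:4d} | working days: {work:3d} "
          f"| spells: {n} | avg days per spell: {avg:.1f}")
# Whether maternity leave is counted, and whether calendar or working days are
# used, changes the picture entirely: the absenteeism "rate" is a convention.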

In Chapter 1, there was a focus on the example of job classification to illustrate the importance of quantifying HR work. However, the methods used to classify jobs, although presented as objective and rigorous, may also be based on conventions, leading to biases that call into question this ideal of objectivity (Box 2.4).

These statisticalization operations, which are rarely questioned by data users, are therefore social or political choices. The variety of possible choices calls into question the idea of a univocal relationship between quantification and reality, and therefore the idea that quantification is only a neutral and objective reflection of reality.

2.1.2.2. The biases of quantified evaluation

The example of quantified evaluation also allows a long list of possible biases in the quantification of individuals to be specified. As seen in Chapter 1, quantified evaluation can be done in several ways: by mobilizing a non-declarative variable external to the organization (sales volume, for example), by mobilizing a declarative external variable (such as customer satisfaction) or by mobilizing a declarative internal variable (evaluation by the manager or colleagues, for example). However, in all cases, biases are possible.

An apparently neutral criterion such as sales volume may in fact contribute to a form of indirect discrimination. Indirect discrimination refers to a situation where an apparently neutral criterion disadvantages a population defined by prohibited criteria. For example, sales volumes are potentially higher on average on Saturdays, evenings or end-of-year holidays, regardless of the qualities of the salespeople. However, the people available to work these particularly high-selling time slots probably have certain sociodemographic characteristics: people without family responsibilities, among whom the youngest are overrepresented, for example.
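A minimal simulation can make this mechanism visible. The Python sketch below assumes, purely for illustration, that two groups of salespeople have identical skill distributions but unequal availability for high-selling time slots; all parameters are arbitrary.

# Hypothetical simulation: two groups with identical skill distributions, but
# unequal availability for high-traffic shifts. All parameters are arbitrary.
import random

random.seed(0)
TOTAL_SHIFTS = 20          # shifts worked per period (assumption)
PEAK_MULTIPLIER = 2.0      # peak shifts assumed to generate twice the sales

def simulated_sales(peak_shifts_available):
    skill = random.gauss(1.0, 0.1)                 # same skill distribution for everyone
    off_peak = TOTAL_SHIFTS - peak_shifts_available
    return skill * (peak_shifts_available * PEAK_MULTIPLIER + off_peak)

no_constraints = [simulated_sales(10) for _ in range(1000)]   # e.g. available Saturdays/evenings
with_constraints = [simulated_sales(3) for _ in range(1000)]  # e.g. family responsibilities

print(f"Average sales, no family constraints:   {sum(no_constraints) / len(no_constraints):.1f}")
print(f"Average sales, with family constraints: {sum(with_constraints) / len(with_constraints):.1f}")
# Identical skills, yet the apparently neutral criterion systematically ranks
# one group lower: an illustration of indirect discrimination.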

Direct discrimination refers to making decisions based directly on prohibited criteria (e.g. giving a lower rating to a woman because she is a woman). This type of discrimination can occur when an individual is rated by another, such as a client or manager (Castilla 2008). Indeed, this type of situation gives rise to a large number of possible biases (Box 2.5).

Quantified evaluation is also biased due to an unavoidable gap between prescribed work and actual work (Dejours 2003, 2006; Vidaillet 2013; Clot 2015), and to the simplification that the management system makes when reporting on work (Hubault 2008; Le Bianic and Rot 2013). Thus, the choice of criteria and methods of quantified evaluation is based on a representation of work that does not always correspond to the daily work actually done. As a result, the evaluation, which often takes the form of a “quantified abstraction” (Boussard 2009), fails to report on all the efforts made by workers, and therefore on all the performance and skills that they deploy (Dejours 2003). Belorgey (2013a) gives the example of the patient waiting-time indicator used in emergency services. This indicator is intended to report on the quality of organization of services and institutions. However, despite its relative complexity and the fact that it takes into account dimensions related to the profile of patients or the care required, it only reflects part of the caregivers’ work, since it does not include information on the quality of care, the potential return of patients who have relapsed, the possible difficulty of making contact with certain patients (language problems, for example), or the social characteristics of patients.

This last example highlights the fact that it is impossible for a quantified evaluation tool to take everything into account, or even to report on everything, which refers back to the notion of accountability mentioned in the previous section. More specifically, quantified evaluation does indeed make it possible to account for some of the work or reality (possibly with biases, as just illustrated), but, in doing so, it masks the rest, which it fails to account for. Recently, the discourse around algorithms and Big Data has suggested the possibility of accounting for everything, or almost everything, or at least for more objects and dimensions than before, because of the emergence of new data and new methods to process them (Hansen and Flyverbom 2015). However, these discourses neglect the fact that algorithms and data themselves constitute mediations between the object and the subject who measures it. Besides, these mediations sometimes constitute “black boxes” that are difficult to read, and ultimately not very transparent (Christin 2017).

The myth of objective quantification therefore does not stand up to the examination of these multiple potential biases. Moreover, quantification contributes to a reduction of the complexity of reality (Berry 1983; Salais 2004), a reduction which makes it possible to reflect, in a standardized way, one or a few dimensions of this reality, but which masks others. However, the choices made to reduce this complexity (which imply, for example, focusing on certain dimensions to the detriment of others) are not neutral, and may stem from power games or ideological debates.

2.1.3. Objectivity, a central issue in HR

Despite the relative ease with which the objectivity of quantification can be questioned, the discourse or myth of objective quantification is spreading and remains significant, particularly in the HR field. This can be partly explained by the fact that the notion of objectivity is a central HR issue. Indeed, many decisions taken in this field can have a significant influence on the professional and personal future of individuals: recruitment, remuneration, promotion, etc. Therefore, being able to justify these decisions is a crucial issue. In this context, guaranteeing a certain objectivity, or an illusion of objectivity, seems necessary to maintain social order and collective cohesion. Managerial discourses idealizing quantification and built on rhetoric oscillating between rationalization and normativity (Barley and Kunda 1992) then support this illusion of objectivity.

2.1.3.1. The myth of objective quantification in the HR field

In the HR field, the myth of objective quantification remains as prevalent as in other fields. It is reflected in a form of trust in figures, indicators and metrics (Box 2.6): the production and publication of figures regularly appears as a guarantee of transparency, which reflects one of the objectivity criteria mentioned above.

The myth of quantification generating transparency is found in the discourse of some providers offering solutions for comparing companies. For example, the Glassdoor website offers employees and former employees the opportunity to assess their working environment and provide information on pay and working conditions. This information, in the form of anonymous comments, but also and above all quantitative data, can then be used by job seekers to find out if the company could suit them, and by the company to identify the points of dissatisfaction of its employees. However, Glassdoor bases a large part of its commercial discourse on the notion of transparency. Its name is of course based on this idea; one of its presentations begins as follows: “Glassdoor, founded in 2007, is the world’s most transparent jobs community that is changing the way people find jobs and companies recruit top talent”. This illustrates the idea that data and quantification are vectors of transparency.

However, maintaining the myth of the objectivity of quantification can be explained in part by the need for the HR function and management to be able to justify the objectivity of the decisions taken. Thus, figures, whether actual measures or projections, are regularly used in the HR field to justify decisions taken at an individual or collective level, as illustrated by Noël and Wannenmacher (2012) on restructuring cases.

2.1.3.2. The importance of (the illusion of) objectivity in the world of work

The notion of organizational justice provides a partial understanding of the importance of the perceived objectivity of decisions made in the professional field.

This notion refers to perceived fairness within the work environment (Schminke et al. 2015). Work in this field highlights the fact that perceived justice – or on the contrary perceived injustice – can have considerable effects on employees’ behavior, in terms of loyalty to the company, performance and commitment (Colquitt et al. 2013)1. Perceived fairness can be conceptually broken down into four dimensions (Ambrose and Schminke 2009):

  – perceived procedural justice refers to the way decisions are made (Cropanzano and Ambrose 2001): criteria, actors in decision-making, general rules related to decision-making, etc. For example, in the case of a pay rise decision, procedural justice could refer to the criteria for raises, to how transparent or well explained they are, or to the people who decide on these raises;
  – perceived distributive justice corresponds to the perceived justice of the outcome of the decision. Thus, in the same example, distributive justice would refer to the following question: is the increase I received fair given the efforts I have made and what my colleagues have received?
  – perceived interactional justice emphasizes the interpersonal dimension (Jepsen and Rodwell 2012). Thus, the impression of being treated with respect and courtesy, and of being treated as well as the other members of the team, is part of this dimension;
  – perceived informational justice (sometimes included in interactional justice) underscores the importance of interpersonal communication. An employee may wonder whether he or she had the same access to information as his or her peers, and whether the rules and procedures were explained to him or her as well. In the case of pay rise decisions, this dimension may, for example, refer to the way in which a team manager communicates the raise criteria to each member of his/her team.

These four dimensions can each be linked to the myth of objective quantification, which supports a more positive perception on each of them (Table 2.1).

Table 2.1. The influences of the myth of objective quantification on perceived justice

Dimension of perceived justice | Influence of the myth of objective quantification
Perceived procedural justice | A decision taken on the basis of a quantified indicator, and therefore considered objective, is perceived as fairer
Perceived distributive justice | Easier alignment between employees’ expectations and prognoses and the decisions taken
Perceived interactional justice | Depersonalization of decisions, less importance given to interpersonal relationships
Perceived informational justice | Criteria easier to explain thanks to reduced complexity

Once quantification is perceived as objective, a rule or procedure consisting of taking decisions on the basis of quantified indicators will be perceived as fairer (procedural justice). In addition, basing decision-making on quantified indicators reduces uncertainty and makes it easier for employees to form prognoses about the decision. Thus, sellers who know that their bonus depends closely on the sales they make simply have to monitor their own sales to know what kind of bonus they will receive. This ensures that they build expectations consistent with what they will ultimately receive, and that they will perceive the decision as fair (distributive justice). Furthermore, as previously shown, the perceived objectivity of quantification implies a form of depersonalization, by reducing the role of the evaluator and introducing a form of standardization (Porter 1996), which can improve perceived interactional justice. For example, once employees know that their manager ultimately has little room for maneuver in a promotion decision affecting them – because this decision is based above all on quantified indicators – they cannot suspect that their relationship with this manager comes into play in the decision-making process. More generally, the introduction of a form of depersonalization reduces the importance of interpersonal relationships. Finally, as pointed out, quantification responds to a logic of reducing the complexity of reality, by measuring only a part of this reality, i.e. by representing in only a few dimensions an infinitely more complex reality. This reduction in complexity facilitates the communication and clarification of decision-making criteria (informational justice). It is therefore easier for a manager to explain to an employee that a promotion decision is based on x criteria defined in such and such a way, than to explain an evaluation based on an overall perception of behaviors, actions and skills.

2.1.3.3. Quantification and reduction of the possibility of criticism

Ensuring a certain level of perceived justice therefore reduces the opportunities for questioning the decisions taken. Quantification tools further reduce these opportunities, for at least two reasons. First, the dominant managerial discourse mobilizes both rationalization rhetoric and normativity rhetoric (Barley and Kunda 1992) to support the illusion of objective quantification. Rationalization rhetoric emphasizes the scientific and methodological guarantees related to quantification, while normativity rhetoric emphasizes the need for objectivity and transparency to provide a peaceful working environment. It can therefore become difficult for organizations and individuals alike to resist both types of rhetoric. Second, quantification tools reduce the questioning of decisions taken, especially as they become more complex. Indeed, statistical complexity sometimes produces side effects that prevent individuals from questioning a numerical result or its interpretation (Box 2.7).

More recently, the emergence of algorithms, which sometimes constitute “black boxes” (Faraj et al. 2018), has made this issue even more acute. Indeed, the impossibility of accessing the principles of algorithm construction prevents both criticizing the results and playing with them (Christin 2017), which leads to a significant loss of autonomy and room for maneuver for employees. Thus, workers may, to some extent, manipulate a rating system with which they are familiar and whose criteria and measures they know, as do the medical staff described by Juven (2016), who choose some codifications rather than others to ensure the budgetary balance of their institution. This possibility disappears, or at least decreases considerably, when the criteria and principles for constructing metrics are not known or are difficult to understand.

The notion of objectivity therefore makes it possible to establish a first link between quantification and decision-making. Even though the myth of objective quantification has given rise to many criticisms and challenges, its persistence in the HR field ensures that it can justify decisions that can have a significant influence on the future of employees, and ultimately reduce the possibility of these decisions being challenged.

2.2. In search of personalization

The link between quantification and decision-making is also based on the notion of personalization. While statistics was long positioned as a science of the impersonal and of large numbers, algorithms now offer the promise of taking the individual into account through quantification. This contributes to the evolution of the positioning of the HR function, which has long been based on impersonal or segmented employee management.

2.2.1. Are we reaching the end of the positioning of statistics as a science of large numbers?

It is with Quetelet that statistics began to be constructed as a science which, starting from data on multiple individuals, succeeds in producing single, aggregate measures (Desrosières 1993). Statistics, the science of quantification, was then defined in opposition to sciences based on the observation of individual cases (Desrosières 2008a). The notions of large numbers, averages and representative samples, which structure the methodology and mathematical validity of the vast majority of statistical laws, are thus part of a vision of statistics as a science dealing with large groups of individuals. However, this historical positioning is now being undermined by the emergence of quantification aimed at personalization and better consideration of the individual.

2.2.1.1. Statistics, the science of the collective and large numbers?

Before Quetelet, scientists like Laplace or Poisson remained interested in individuals. Quetelet, on the other hand, mobilized statistical rules to produce new objects, societal or at least collective, and no longer individual ones (Desrosières 1993).

The history of the notion of the average, recounted by Desrosières (1993, 2008a), gives a good account of this movement. Indeed, the notion of the average admits of two definitions. First, it refers to the approximation of a single magnitude (e.g. the circumference of the earth) from the aggregation of several measures of that magnitude; second, it refers to the creation of a new reality from the aggregation of the same measure over several individuals (e.g. the average height of human beings). It was mainly Quetelet the astronomer who showed the possibility and interest of the second type of average, based on the fiction of the “average man”. Quetelet took physiological measurements of his contemporaries (height, weight, length of limbs, etc.) and observed that the distribution of these measurements followed a bell curve (later called the normal distribution). He deduced from this the existence of an “average man”, bringing together all the averages of the measurements made. This second type of average has thus allowed the emergence of new measurement objects, no longer related to individuals but to society or the collective. However, its dissemination came up against heated debates linked to the deterministic and fatalistic vision that this definition of the average seems to imply, in contradiction with the notions of individual free will and responsibility. Moreover, it also gave rise to practical controversies, for example in the field of medicine, between those in favor of case-by-case medicine, in which the doctor bases his diagnosis on the knowledge of each patient, and those in favor of “numerical” medicine, mobilizing the observation of interindividual regularities to establish diagnoses (Desrosières 1993). Despite these debates, this definition of the average gradually became central in statistics, under the influence of works such as those of Galton.

Desrosières also points out that at the end of the 19th Century, Durkheimian sociology helped to strengthen this position by using statistics to identify regularities, i.e. average behaviors. What is more, Durkheim initially aligned the notion of the average man with that of normality or the norm, and then presented deviations from the mean as pathologies. However, in Le Suicide, he revisited this idea, distinguishing the notion of the average type from that of the collective type. While he considers the average man (the average of individual behaviors, for example) to be a rather mediocre citizen, with few scruples and principles, he defines the collective man (understood as the collective moral sense) as an ideal citizen, respectful of the law and others. However, whatever the philosophical or epistemological significance given to the notion of average, Durkheim did indeed base his remarks, analyses and theories on the calculations of average and statistical regularity. In this respect, he contributed to the positioning of statistics as a science of the collective and not of the individual.

Beyond the notion of the average, several methodological foundations are necessary to ensure the validity of a large part of statistical results, and these principles also contribute to positioning statistics as a science of the collective and not of the individual. Two foundations of the rules of statistical inference – i.e. the possibility of generalizing results obtained on a sample to a larger population (Porter 1996) – are worth returning to here: the law of large numbers and the notion of a representative sample.

The law of large numbers makes it possible to ensure a correspondence between a random sample and a target population (Box 2.8).

The notion of representativeness qualifies the characteristics that a sample must have in order for its results to be generalizable (Box 2.9).

These two principles (the law of large numbers and the notion of representativeness) therefore also underpin statistics as a science of the collective, insisting on the notion of inference, i.e. the search for the possibility of generalization to entire populations. In this view of statistics, the individual level is relegated to the status of random measurement error, an error presented as harmful to the quality of the results obtained, and which should therefore be reduced to a minimum.
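As a simple illustration of the first of these principles (the law of large numbers evoked in Box 2.8), the following Python sketch draws samples of increasing size from a simulated population and shows the sample mean converging toward the population mean; the population itself is an arbitrary assumption of the example.

# Illustrative sketch of the law of large numbers: the mean of a random sample
# approaches the population mean as the sample grows. The simulated population
# (e.g. ages) and the sample sizes are arbitrary assumptions.
import random

random.seed(42)
population = [random.gauss(35.0, 8.0) for _ in range(100_000)]
population_mean = sum(population) / len(population)

for n in (10, 100, 1_000, 10_000):
    sample = random.sample(population, n)
    sample_mean = sum(sample) / n
    print(f"n = {n:6d} | sample mean = {sample_mean:6.2f} "
          f"| gap to population mean = {abs(sample_mean - population_mean):.3f}")
# As n grows, the gap tends to shrink: the statistic becomes reliable for the
# collective, while saying nothing about any particular individual.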

However, as shown in Chapter 1, the recent emergence of algorithms has introduced a new promise in relation to quantification, based on the idea that quantification can instead contribute to a better consideration of the individual. Thus, suggestion algorithms (for purchases or content, for example) are designed to suggest the right product or content to the right person. Chapter 1 gave the example of collaborative filtering algorithms, which match individuals based on their histories (purchases, content consumption, etc.). This type of algorithm corresponds to a form of personalization in the sense that, theoretically, each individual should receive a unique set of suggestions.
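A minimal sketch can illustrate the principle of such collaborative filtering. The Python fragment below matches users on the basis of their consumption histories using a Jaccard similarity; the data, the similarity measure and the suggestion rule are illustrative assumptions rather than a description of any actual system.

# Illustrative sketch of user-based collaborative filtering on consumption
# histories. The data, the Jaccard similarity and the suggestion rule are
# assumptions made for the example.
histories = {
    "alice": {"item_a", "item_b", "item_c"},
    "bob":   {"item_a", "item_b", "item_d"},
    "carol": {"item_e", "item_f"},
}

def jaccard(s1, s2):
    # Similarity between two histories: shared items / all items
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

def suggest(user, histories, k=1):
    # Rank the other users by similarity, then suggest items consumed by the
    # k nearest neighbors that the target user has not yet consumed.
    neighbors = sorted(((jaccard(histories[user], h), u)
                        for u, h in histories.items() if u != user), reverse=True)
    suggestions = set()
    for _, neighbor in neighbors[:k]:
        suggestions |= histories[neighbor] - histories[user]
    return suggestions

print(suggest("alice", histories))  # {'item_d'}: each user receives a distinct set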

2.2.1.2. Using statistics to customize

This promise, to take the individual into account, is based on several conditions: a large amount of data, good quality data, data updated in real time and the possibility of identifying variables that can be substituted for each other.

The amount of data seems to be a first essential condition for quantification to allow customization. This notion of quantity in fact covers two dimensions. First of all, it refers to the number of variables available and their richness: the more information the statistician has about individuals, the richer and more varied the possibilities of personalization will be, because the information will allow a greater degree of accuracy. Second, it also refers to the number of individuals in the database. Interestingly, having a large number of individuals also allows for greater personalization, because having more cases once again allows for greater accuracy. One dimension can compensate for the other. In the case of the collaborative filtering algorithms presented above, only the history of individuals is really necessary, which is very little information. On the other hand, to be able to effectively match individuals to each other based on their histories, it is better to have a very large number of individuals, to maximize the probability that two individuals will have the same history, or very similar histories.

However, the amount of data is not enough, not least because it does not always compensate for poor data quality. This poor quality can be expressed in different ways, including unreliable information (Ollion 2015) or missing information for a large number of individuals. Unreliable information, which is a problem for quantification in general, calls into question the proximity between data and reality. However, measuring reliability remains difficult and the same data can be considered reliable or unreliable depending on the context. Self-reported data on beliefs, behavior and level of education are regularly denounced as unreliable because of the existence of a social desirability bias, which leads respondents to want to present themselves in a favorable light to their interlocutors, and therefore to give the answers that seem closest to the social norms in force (Randall and Fernandes 1991). However, they can be considered quite reliable in cases where the focus is precisely on self-reporting (of beliefs, behaviors, diploma level) by individuals. The reliability of the data is therefore highly contingent.

Data quality can also be threatened by non-response, or the lack of information on a significant number of individuals, which poses a particular problem when quantification is used to personalize, since it means that a large number of people will be deprived of this personalization. This problem is found particularly in data from social networks. These data have many “holes” (in the sense that many users do not take any action on social networks), which are impossible to neglect, but also difficult to account for. Indeed, these “holes” stem from selection bias (Ollion and Boelaert 2015), in the sense that populations that are active and inactive on social networks certainly do not have exactly the same characteristics – sociodemographic characteristics, for example. Moreover, data from social networks have a major disadvantage, related to the fact that we do not know the characteristics of the entire population of members: we cannot adjust the samples and avoid this type of bias (Ollion 2015). Beyond the question of representativeness, these “holes” prevent the provision of personalized information to those concerned.

The reliability of the data is partly based on a third condition, close to the criteria of timeliness and punctuality mentioned by Desrosières: the regular, even real-time updating of the data. This condition ensures that the data accurately reflect the situation at a given time t, which becomes all the more important when the reasoning is aimed at the individual level and not the collective or societal level. Indeed, at a collective level, variations are slowed down and reduced by inertia linked to mass (thus, monthly variations in unemployment rates are very small, whatever the country concerned). However, at an individual level, variations can be much faster, as an individual can change status, behavior, representations, almost instantly. This criterion also refers to the “velocity” characteristic highlighted by Gartner’s report on Big Data. Beer (2019) thus underlines the importance of speed and “real time” as the basis of the data imaginary.

The ability to use quantification to personalize also depends in some cases on the ability to identify variables that can be substituted for each other (Mayer-Schönberger and Cukier 2014). Thus, a content suggestion algorithm must identify which content may be appropriate for which individual. The most effective way to do this would probably be to have information about the individual’s tastes and preferences. However, this type of variable is rarely observable. The algorithm must then find surrogate or proxy variables, i.e. observable variables correlated to tastes and preferences (which are unobservable). The history of content consumption plays this role as a surrogate variable in collaborative filtering algorithms.

Quantification can indeed be used for customization purposes, as long as a few criteria are met. The example of targeted advertising provides a concrete illustration (Box 2.10).

In addition, beyond these data requirements, the promise of personalization through quantification leads to several changes: epistemological, methodological and practical.

First of all, from an epistemological point of view, this implies renewing the way the relevance of methods and models is measured. Indeed, the relevance of a statistic produced to report on a collective phenomenon is assessed according to several factors: the homogeneity of the population, which ensures that a measure such as the mean makes sense (assessed by examining the variance, for example); the verification of the statistical assumptions related to the statistical laws used; and the meaning and interpretation that can be inferred from the statistics. In quantification aimed at personalization, a relevance measurement will instead seek to reflect the consideration given to each individual, which implies some interindividual variability and may require, for example, taking individual feedback into account.

From a methodological point of view, the possibility of taking into account individuals’ feedback about the relevance of a result that concerns them seems valuable when quantification is aimed at personalization, whereas this possibility is almost never explored when quantification is aimed at a collective level. Asking individuals for feedback makes it possible to measure the relevance and quality of the models, as has been shown, but also to improve the quality of customization (Box 2.11).

From a practical point of view, using quantification to personalize requires not anonymizing the data (or at least requires the possibility of tracking the same user), which creates new issues related to the protection of personal data, as discussed in Chapter 5.

The use of quantification for a customization objective thus marks a break with the traditional positioning of statistics. This break is embodied in new criteria of rigor, quality and relevance, and in changes on the epistemological, methodological and practical levels.

2.2.2. Personalization: a challenge for the HR function

Personalization through quantification is now a real challenge for the HR function. Initially coming from targeted marketing, the notion of personalization has gradually entered the HR function, and has generated a certain interest, among other things, in the trend of “HR marketing”.

2.2.2.1. A model from marketing

During the 20th Century, marketing developed the idea of taking consumers’ needs into account and adjusting to them (Kessous 2012b). This has required the industrial world to renew its operating methods, which were based in the first half of the 20th Century on the notion of maximum product standardization. Thus, in the automotive sector, this has been achieved by combining a standardized basic product with options that can be added at the customer’s request (functionalities, color, etc.).

The first evolution toward adjusting to client needs came, in the field of quantification, through the use of segmentation techniques, which make it possible to divide a population into several groups of clients with homogeneous needs and expectations (Kessous 2012b). These segmentation techniques, described in Box 2.12, are based in particular on the sociodemographic characteristics of clients and on the assumption that two people with similar sociodemographic characteristics will have similar expectations of the brand.
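As an illustration only, the Python sketch below implements a very crude sociodemographic segmentation by crossing an age band with a household type; actual segmentation techniques, such as those evoked in Box 2.12, are of course more sophisticated, and the data and rules here are assumptions made for the example.

# Illustrative sketch of a sociodemographic segmentation: customers are grouped
# by crossing an age band with a household type. Data and rules are assumptions
# made for the example; real techniques are more sophisticated.
from collections import defaultdict

customers = [
    {"id": 1, "age": 24, "children": 0},
    {"id": 2, "age": 41, "children": 2},
    {"id": 3, "age": 38, "children": 1},
    {"id": 4, "age": 67, "children": 0},
]

def segment(customer):
    if customer["age"] < 30:
        age_band = "under_30"
    elif customer["age"] < 60:
        age_band = "30_to_59"
    else:
        age_band = "60_plus"
    household = "with_children" if customer["children"] > 0 else "no_children"
    return f"{age_band}/{household}"

segments = defaultdict(list)
for c in customers:
    segments[segment(c)].append(c["id"])

for name, members in sorted(segments.items()):
    print(name, "->", members)
# All members of a segment receive the same offer, on the assumption that
# similar sociodemographic profiles imply similar expectations.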

More recently, the development of loyalty card systems has enabled brands to record not only sociodemographic characteristics, but also customers’ purchasing histories. These new data have made it possible to introduce a form of personalization (Kessous 2012b): offering customers coupons for a specific product that they rarely buy, for example. The growing success of online commerce then made it possible to collect even more precise data on purchasing habits via the traces left by Internet users. This has led to the emergence of behavioral marketing, which aims to record all actions carried out online and then make suggestions for purchases or content (Kessous 2012a). The targeted advertising model is so profitable that new business models have emerged: offering free access to services in exchange for the collection of user data, with remuneration coming from advertisers.

Marketing has therefore been part of a progressive movement from segmentation to personalization. This movement is embodied in particular by a change in quantification conventions (Box 2.12).

Marketing today thus clearly enacts the principle of personalization through quantification. Recently, however, the notion of “HR marketing” has emerged (Panczuk and Point 2008). The aim is to apply marketing methods and techniques to the HR field, both for candidates and employees, with a view to attracting and retaining them. The logic of personalization, inherited from marketing, can be introduced into HR through this movement.

2.2.2.2. The misleading horizon of an individualizing HRM?

Like marketing, HRM can give rise to different forms of personalization (Arnaud et al. 2009): collaborative, adaptive, cosmetic and transparent personalization. Collaborative personalization is based on the employee’s expression of his or her individual needs, and then their consideration by the company. Adaptive personalization refers to the adaptation of HRM practices to each employee: individualized schedules, early retirement formulas or work choice tools. Cosmetic personalization refers to offering the same service to all employees, but with a different presentation according to the employee’s profile. Transparent personalization consists of offering each employee unique services, based on his or her preferences, without the employee having to express them. The authors do not focus on the particular case of personalization enabled by quantification, but it seems closest to this fourth type. The examples given in Chapter 1 (Box 1.17) make this point by showing what forms personalization through quantification can take in HRM: algorithms for personalized suggestions of jobs in the context of internal mobility, of training, or of career paths, for example.

The introduction of the concept of HR personalization is not without its difficulties. Indeed, as Dietrich and Pigeyre (2011) point out, HRM has classically positioned itself as a management activity based on different segments (e.g. type of contract or status), and has given relatively little prominence to the idea of taking individual needs and expectations into account. However, Pichault and Nizet (2000) have identified the existence of a form of HRM that they describe as “individualizing”. This form of HRM is characterized by the establishment of interindividual agreements on the acquisition and enhancement of skills, and seems particularly suited to organizations with multiple statuses. However, it also requires reflection and work on organizational culture in order to compensate for interindividual differentiation through integrative mechanisms.

One might think that customization through quantification would fit into this model; however, this assumption does not stand up to scrutiny. Indeed, the individualizing model implies an interindividual negotiation (e.g. between the manager and the employee) over the employee’s needs, his/her recognition, etc. It therefore places great emphasis on interpersonal relationships and on the expression of needs by employees themselves (as in the collaborative personalization mentioned above). However, this relational dimension is most often absent from the devices for personalization through quantification mentioned above (suggestion algorithms, for example). Moreover, although personalized, quantification always introduces a form of standardization: even if each employee receives a unique set of job suggestions, all employees remain subject to the same process of data collection and job suggestion.

Personalization through quantification is an evolution for both statistical science and HRM. Quickly adopted in the marketing field, it is still spreading tentatively in HRM, supported by the growth of HR marketing. While this customization does not refer to a complete change in paradigm or HRM model, it does require adjustments within existing models, and may perhaps help to blur the distinctions between them.

2.3. In search of predictability

The link between quantification and decision-making is being challenged by the emergence of so-called predictive approaches. These approaches thus modify both the positioning of statistics and the HR function.

2.3.1. Are we heading toward a rise in predictability at the expense of understanding?

Historically, the science of statistics has positioned itself as a science that measures the past or present in an attempt to understand and explain it. However, the rise of so-called predictive approaches calls into question this positioning by introducing an almost “prophetic” dimension (Beer 2019). The notion of prediction also raises questions about the effect of statistics on reality, in relation to the notion of performativity.

2.3.1.1. Statistics, the science of description and explanation, but also of prediction?

Initially, and even if national histories may differ on the subject, European statisticians generally focused on the question of measuring human, social, individual or collective quantities (Desrosières 1993). Although they now appear trivial, population census operations contributed greatly to the emergence of statistical science from the 18th Century onwards, which explains its name2. At that time, statistics had several characteristics. It aimed at descriptive knowledge, i.e. it sought to describe the world through numbers. Moreover, it was synchronic, in the sense that it gave an image of this world at a given moment t (unlike history, which is interested in developments and the reasons for them).

More recently, the rise of econometrics and modeling has helped to highlight another goal, that of explanation, i.e. the search for cause-and-effect links between different measurable phenomena. Even if this objective could already be seen in the 19th Century, in Galton’s study of heredity or in Durkheim’s study of the social determinants of suicide, it became almost unavoidable in the 20th Century. It is in particular the combination of, on the one hand, progress in probability theory and, on the other hand, the concern to model reality that allowed the birth of modern econometrics, which aims to confront economic or sociological theories with empirical data (Desrosières 1993) but also to highlight causal relations (Behaghel 2012). The Econometric Society and its journal (Econometrica, founded in 1933) affirm that econometrics aims to understand quantitative relationships in economics and to explain economic phenomena. The challenge of identifying causal relationships obviously comes up against the difficulty of proving that the relationships are indeed causal, and not merely simultaneous. The development of “all other things being equal” (or ceteris paribus) reasoning supports the process of identifying causes by making it possible to control for third variables. More precisely, two variables may appear artificially linked to each other because both are linked to a third variable; “all other things being equal” reasoning makes it possible to control for this third variable and thus to eliminate such cases. However, this methodology is not sufficient to prove a causal relationship, i.e. a relationship of anteriority, between two variables. Behaghel (2012) traces the multiple methodological developments made in the 20th Century to make it possible to identify causalities.
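Before turning to these developments, the “third variable” problem just described can be made concrete with a minimal sketch. It is not drawn from the works cited: the data are simulated and the variable names arbitrary. Two variables appear correlated only because both depend on a third, and controlling for that third variable dissolves the apparent link.

```python
# Minimal sketch of "all other things being equal" reasoning on synthetic data.
# X and Y are both driven by a third variable Z; controlling for Z makes their
# apparent association disappear.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
z = rng.normal(size=n)                 # third ("confounding") variable
x = 0.8 * z + rng.normal(size=n)       # X depends on Z, not on Y
y = 0.8 * z + rng.normal(size=n)       # Y depends on Z, not on X

# Naive model: Y regressed on X alone -> X seems to "explain" Y.
naive = sm.OLS(y, sm.add_constant(x)).fit()

# "Ceteris paribus" model: Y regressed on X while controlling for Z
# -> the coefficient on X collapses toward zero.
controlled = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()

print("coefficient on X without control:", round(naive.params[1], 3))
print("coefficient on X controlling for Z:", round(controlled.params[1], 3))
```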

The first development is called the structural approach. It requires preliminary modeling, often based on theoretical models, of the links between variables, and the identification of three types of variables: a variable to be explained, an explanatory variable and an instrumental variable, which has an impact on the explanatory variable but no direct impact on the explained variable. However, the entire validity of this approach rests on theoretical assumptions formulated ex ante about the causalities between variables (and in particular on the central assumption that the instrumental variable affects the explained variable only through the explanatory variable). The second development, temporal sequences, is based on the observation of temporality: if a variation of a variable X occurs before the variation of a variable Y, then it is assumed that X has an effect on Y. Here again, however, this approach requires a fundamental assumption of a link between X and Y, and the possibility of excluding third variables. The third development, increasingly used in public policy evaluation, is the experimental model. It is based on controlled experiments comparing a test group and a control group (as in medical studies on the effects of drugs).
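The logic of the third of these developments can likewise be sketched in a few lines. The figures below are simulated under an assumed effect size and do not describe any particular study: individuals are randomly assigned to a test group and a control group, and the effect is estimated as the difference between the two group means.

```python
# Minimal sketch of the experimental model: random assignment to a test group
# and a control group, then comparison of group means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2_000

assigned_to_test = rng.random(n) < 0.5            # random assignment
baseline = rng.normal(loc=50, scale=10, size=n)   # unobserved individual heterogeneity
true_effect = 2.0                                  # assumed effect of the intervention
outcome = baseline + true_effect * assigned_to_test

test_group = outcome[assigned_to_test]
control_group = outcome[~assigned_to_test]

estimated_effect = test_group.mean() - control_group.mean()
t_stat, p_value = stats.ttest_ind(test_group, control_group)

print(f"estimated effect: {estimated_effect:.2f} (true effect: {true_effect})")
print(f"p-value of the test/control comparison: {p_value:.4f}")
```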

These three approaches share a common aim: identifying causal relationships and explaining the phenomena observed. They also show that the epistemological paradigm of econometrics is essentially based on an approach that mobilizes a theory from which hypotheses can be formulated and tested, hence the notion of modeling. It is therefore a hypothetical-deductive approach (Saunders et al. 2016). Moreover, the econometric approach generally falls within a positivist epistemological paradigm (Kitchin 2014), which assumes the existence of a reality independent of the researcher, a reality that can be known (and in this case measured).

However, another purpose was sometimes assigned to quantification during the 20th Century and has recently gained prominence: prediction. In fact, the psychotechnical tests mentioned in this book’s introduction are intended to measure human skills, but with the aim of predicting behaviors and performance levels. One of the criteria for the relevance of these tests is their ability to effectively predict the success of individuals. They therefore constitute a first step toward the search for prediction. Similarly, in most Western countries, administrations regularly provide forecasts (of growth, employment rates, etc.) for the coming year or years (Desrosières 2013). More recently, the emergence of so-called predictive analysis, particularly in relation to Big Data, has contributed to the dissemination of this concept (Box 2.13).

Predictive statistics are therefore based on the same methods as descriptive or explanatory statistics. Nevertheless, they represent important changes in epistemological and methodological practice.

On the epistemological level, the notion of prediction introduces three evolutions. First, it implies a lower interest in the meaning and interpretation of the model, linked to a focus on its predictive quality. In explanatory statistics, it is essential to be able to interpret the model from the perspective of causality, and to be able to spell out the links between the variables. This explains why theoretical models have been used to determine a priori the meaning of the links between the variables. However, the focus on predictive quality results in a decrease, or even the disappearance, of the importance of theoretical models (Kitchin 2014; Cardon 2018). Indeed, if the only purpose of a model is to predict a variable Y as accurately as possible, and if computational power and the quantity of data are such that all the relationships between a large number of variables can be tested, why bother with models that would lead to preselecting ex ante the variables considered relevant? Box 2.14 reflects these two concomitant developments. The rise of predictive analysis introduces essential questions about the notion of individual free will, and pits two discourses against each other: one convinced that the majority of human behavior is predictable, the other that human beings always retain a form of unpredictability.

Methodologically or practically, the relevance of a predictive statistic is not measured in the same way as that of a descriptive or explanatory statistic. The focus is on predictive validity, not on the adequacy of a model to a theory or to the data already observed (Box 2.15).
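As an illustration of this shift, the following minimal sketch (simulated data, arbitrary variable names, not taken from the works cited) assesses a model on observations it has not seen, rather than on its in-sample fit:

```python
# Minimal sketch of predictive validity: the model is judged on a held-out test
# set, not on its fit to the training data or its conformity to a theory.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n = 3_000
X = rng.normal(size=(n, 5))                       # arbitrary quantified features
# Hypothetical binary outcome ("the event occurs" / "it does not").
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

in_sample = accuracy_score(y_train, model.predict(X_train))
out_of_sample = accuracy_score(y_test, model.predict(X_test))

print(f"in-sample accuracy (fit to the data): {in_sample:.3f}")
print(f"out-of-sample accuracy (predictive validity): {out_of_sample:.3f}")
```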

During the 20th Century and at the beginning of the 21st Century, statistics, long confined to descriptive and explanatory dimensions, were given a new objective: that of prediction. This goal has grown significantly in recent years, particularly in HR, in line with the promises associated with Big Data. Managerial discourse now distinguishes between so-called decisional analysis (the use of data to make decisions, and therefore the EBM approach), so-called augmented analysis (tools capable of interpreting data to facilitate understanding) and so-called predictive analysis (which aims to anticipate behavior based on trends) (Baudoin et al. 2019). Although based on the same quantitative methods, this new purpose nevertheless introduces relatively significant epistemological and practical changes.

2.3.1.2. Prediction or performativity?

The question of prediction refers to the possibility of anticipating the occurrence of an event in reality. The positivist paradigm can fully accommodate this objective. However, the constructivist approach, which challenges the idea of a reality independent of the measurement made of it, leads to a more cautious stance on this notion of prediction.

Indeed, many studies have highlighted the performativity of quantification, namely the fact that quantification has an effect on reality (see Callon’s (2007) work on the economy). This performativity can take many forms (MacKenzie 2006). Generic performativity is the simple use of quantification tools (methods, metrics and results): for example, the use of quantified measurements of worker performance constitutes a change in HR practices. Effective performativity refers to the effect of this use on reality. Within effective performativity, so-called Barnesian performativity characterizes situations where reality is modified in the direction indicated by the quantification tool. The notion of a self-fulfilling prophecy is a good illustration of this type of performativity: the very fact of making a quantified prediction about an event makes it happen. Counterperformativity, on the other hand, refers to cases where reality is modified in the opposite direction to that indicated by the quantification tool. However, these definitions of the performativity of quantification remain relatively theoretical. For the sake of clarity, the visible effects that quantification can have on reality are categorized below.

First, quantification creates a new way of seeing the world and constitutes as objects things that may not have been objects before (Espeland and Stevens 2008). Espeland and Stevens (1998) give the example of feminists who have sought to measure and value unpaid domestic work, so as to highlight inequalities in the distribution of wealth, on the one hand, and of domestic tasks, on the other. Similarly, the work of Desrosières (1993, 2008a, 2008b) or Salais (2004) gives multiple examples of categories of thought created by quantification, from the average man already mentioned to the unemployment rate, including inequalities.

Second, quantification can lead individuals to adopt certain behaviors, what Espeland and Sauder (2007) refer to as “reactivity”, and Hacking (2001, 2005) as loop or interaction effects. Espeland and Sauder give the example of rankings (e.g. of academic institutions) and show how members of institutions, and the institutions themselves, adapt their behavior according to the ranking criteria. In another register, an algorithm that suggests content (or posts, or training) to an individual can induce a behavior in that individual (following the suggestion). Hacking shows how certain human classifications produce a loop effect, because they contribute to “shaping people” (Hacking 2001, p. 10). The literature is also particularly rich on what can be described as the perverse effects of quantification, corresponding to situations where, in response to quantification, individuals adopt behaviors that are at odds with the objective of the quantification. This is particularly the case for quantified evaluations of work and performance. Teachers assessed on their students’ scores on a standardized test may adopt deviant behaviors (cheating, for example) or even focus all their teaching on learning the answers to the test (possibly by heart), thus moving away from their fundamental mission of transmitting intellectual content (Levitt and Dubner 2006). Similarly, hospital doctors evaluated on the number of patients treated may be tempted to select the patients who are easiest to treat, or to reduce the quality of their care (Vidaillet 2013). The latter strategy can lead to an increase in the rate at which patients return to hospital, which ultimately defeats the initial purpose (cost reduction).

Third, quantification can directly modify the real world without going through the intermediary of individuals. Matching algorithms are a good example of this form of performativity (Roscoe and Chillas 2014), especially when they no longer leave room for human intervention. For example, a recruitment algorithm to which the pre-selection of candidates is entirely delegated acts directly on the real situation (through the selection of candidates), without the need for a human intermediary.
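A deliberately simplified sketch of such a device makes this last point tangible (the names, weights and selection rule below are invented for illustration only): once the scoring rule is set, the shortlist is produced without any human judgment intervening.

```python
# Illustrative sketch of an automated pre-selection step: candidates are scored
# against a job profile and the top-ranked ones are retained automatically.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    years_experience: float
    skills_match: float  # share of required skills present, between 0 and 1

def score(c: Candidate) -> float:
    # Arbitrary weighting chosen for illustration only.
    return 0.6 * c.skills_match + 0.4 * min(c.years_experience / 10, 1.0)

candidates = [
    Candidate("A", 2, 0.9),
    Candidate("B", 8, 0.5),
    Candidate("C", 5, 0.7),
    Candidate("D", 1, 0.3),
]

# The algorithm alone decides who goes to the next stage: this is where the
# tool acts directly on reality, with no human intermediary.
shortlisted = sorted(candidates, key=score, reverse=True)[:2]
print([c.name for c in shortlisted])
```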

These different examples therefore illustrate the multiple effects that quantification can have on reality. It thus seems illusory to define predictive analysis as the mere anticipation of future states of a reality that would remain unaffected by the quantification performed. By predicting a future state, quantification influences, directly or indirectly, the probability of that state occurring.

2.3.2. The predictive approach: an issue for the HR function

These conceptual debates should not obscure the fact that the predictive approach is now a real issue and represents changes for the HR function. First of all, it is part of an attempt to renew the relationship between HR and employees. Second, it reshapes the relationship between the HR function and the company’s management.

2.3.2.1. An issue in the relationship with employees

Employees, who are also consumers, are increasingly accustomed to algorithmic tools that anticipate their wishes and needs: Amazon recommends products, Deezer music, Netflix movies, etc. In addition, some players have already invested in the HR field. As seen in Chapter 1 (Box 1.17), LinkedIn already seeks to anticipate the career development wishes of its members and suggests positions or training. It is therefore conceivable today that an employee working in a given company could receive, via LinkedIn, an offer for a position in the same company, which amounts to a form of outsourcing, or even uberization, of internal mobility. Employees can therefore expect the same form of proactivity from their company’s HR function (Box 2.16).

In the context of a crisis of legitimacy of the HR function, often suspected of focusing on the most administrative aspects and cost reduction, adopting this predictive approach can therefore send the signal of an HR function that is indeed concerned about employees and their development.

2.3.2.2. An issue in the relationship with the company management

In addition, the HR function has a strong interest in developing a proactive approach and stance. This is the ambition of forward-looking policies, for example around the forward-looking management of jobs and skills. Today, these policies are based on trend projections and assumptions about market developments to identify not only the key skills of tomorrow but also the jobs that will need to be recruited for. This type of policy can benefit significantly from predictive analysis, which could improve the accuracy with which such changes, and the resulting recruitment needs, are measured, for example through resignation prediction algorithms (Box 2.17).
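By way of illustration only, the individual departure probabilities produced by such an algorithm could be aggregated into anticipated recruitment needs. The figures, team names and probabilities below are invented; in practice they would come from a model of the kind evoked in Box 2.17.

```python
# Illustrative sketch: aggregating hypothetical departure probabilities per team
# to anticipate recruitment needs.
from collections import defaultdict

# Hypothetical model output: (employee, team, probability of leaving within a year)
predictions = [
    ("emp_01", "sales", 0.62),
    ("emp_02", "sales", 0.15),
    ("emp_03", "support", 0.48),
    ("emp_04", "support", 0.55),
    ("emp_05", "engineering", 0.08),
]

expected_departures = defaultdict(float)
for _, team, p_leave in predictions:
    expected_departures[team] += p_leave  # expected number of departures per team

for team, expected in sorted(expected_departures.items()):
    print(f"{team}: ~{expected:.1f} expected departures -> anticipated recruitment need")
```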

The predictive approach has at least two benefits for the company. The first is the possibility of implementing corrective actions upstream where necessary: for example, strengthening absenteeism prevention policies ahead of periods when high absenteeism is expected. The second is the possibility of anticipation (such as providing reinforcements for teams and periods at high risk of absenteeism).

Integrating a form of this predictive approach into decision-making represents a change in stance and an improvement for the HR function, both in its relations with employees and in its relations with company management. It is therefore easier to understand the success of this type of approach in the managerial literature, for example.

This chapter therefore focused on the question of the link between quantification and decision-making, and explored three components of this link. First, the myth of a quantification that allows more objective decisions to be made was documented and questioned. Then two changes were studied. The first concerns the use of quantification for personalization purposes (and therefore decision-making related to individuals) and the second concerns prediction purposes (and therefore decision-making related to the future). These two developments introduce epistemological and possibly methodological changes for statistical science, and changes in stance and logic for the HR function. They are also part of an idealized vision of quantification, based on the idea that quantification improves decision-making, whether it is directed toward individuals or toward the future. This therefore underscores the persistence of the myth around quantification and raises the question of the effects that challenging this myth could have within organizations.

  1. It should be emphasized that this work focuses only on perceived justice, not real justice, and raises little question about the relationship between the two.
  2. The word “statistics” comes from the German Statistik, a word coined by the economist Gottfried Achenwall, which he defines as the body of knowledge that a statesman must possess. The roots of this word therefore underline the early intertwining of government and statistics (Desrosières 1993).
  3. See https://www.businessinsider.fr/us/workday-predicts-when-employees-will-quit-2014-11 (accessed October 2019).