3
How are Quantified HR Management Tools Appropriated by Different Agents?

The quantification tools used in HR can be considered as management tools (Chiapello and Gilbert 2013). Yet, the dissemination of a management tool and its appropriation by the various actors are not necessarily immediate. Thus, Vaujany (2005) recommends looking at the appropriation of a management tool from two different angles: that of designers, on the one hand, and that of users, on the other. He shows that there may be a gap between the vision of designers and that of users, and that the use of a management tool always deviates to some extent from what its designers expected. In this case, the designers of HR quantification tools are extremely diverse, ranging from HR actors themselves to data experts, consulting firms or researchers, for example.

In this chapter, dedicated to the appropriation of quantification tools, it seems important to me to go beyond this dichotomy between designers and users and to distinguish, albeit roughly, between management actors (management and the HR function), on the one hand, who may have an interest in disseminating quantification tools, and employees and their representatives, on the other hand, who may sometimes be reluctant. Indeed, historically, management and the HR function have regularly used quantification as a rationalization tool (section 3.1). Conversely, employees may show a certain distrust of these tools. This mistrust may relate to data collection and processing (section 3.2), which refers more to a psychocognitive dimension, but also to the decision-making based on them (section 3.3), which refers more to a sociopolitical dimension. It should be noted, however, that this very crude and schematic distinction should not overshadow the many variations that can exist between individuals, but also between organizations and between sectors. Thus, a hypothesis can be formulated: employees are both more convinced of the potential benefits of quantification and more aware of its limitations in companies in technical sectors that employ many engineers.

3.1. The different avatars of the link between managerial rationalization and quantification

Management and the HR function have regularly shown their interest in rationalizing the organization. In the organizational field, rationalization aims at organizational efficiency and the optimization of human resources management. Since the beginning of the 20th Century, this interest in rationalization has repeatedly been reflected in the mobilization of quantification tools (Salais 2016). The example of Taylorism was given previously: Taylorism relied essentially on the measurement of work in order to rationalize it. Three further examples structure this section: first, bureaucracy as studied and presented by Weber and then Crozier; second, New Public Management (NPM), extensively studied in sociology and management; finally, and more recently, algorithmic management.

3.1.1. Bureaucracy

Among these three examples, bureaucracy was the first to appear historically. Initially confined to public administration, bureaucracy has spread to other organizations. However, its characteristics, analyzed in particular by Weber (1971) and Crozier (1963), remain the same. The use of quantification, although rarely mentioned by these two authors, may constitute one of the characteristics of bureaucracy, particularly because quantification embodies a form of rational-legal authority.

3.1.1.1. The Weberian ideal type of bureaucracy and its extensions

Weber (1971) identified the six principles of the bureaucratic ideal type. First, jobs, tasks and responsibilities are precisely defined. Recruitment and selection for these jobs are based on technical skills, verified by obtaining a diploma or through a competition. Then, these different jobs are integrated into a formal hierarchical structure, which precisely defines the links of submission and authority. In the same way, the rules are formalized and the work is standardized, through rules, codes, methods, as well as through strict control of compliance with these rules. As a result, work relations can and should be impersonal, avoiding both conflict and emotional attachment. Finally, the salary is essentially fixed and depends closely on the job held, and promotions are based in particular on seniority.

The list of these principles makes it possible to identify more general characteristics of this ideal type. First of all, depersonalization is a central issue in bureaucracies. Everything is put in place to prevent the interpersonal dimension from taking precedence over the general order of the structure. Thus, the rules of work, remuneration and promotion aim to avoid excessive submission and dependence on others (the line manager, for example, but also colleagues). Second, standardization is a key element: this makes it possible to ensure the coordination of work in a rigid relational context, contributes to reducing the importance of people and personalities, and ensures that the general structure is maintained. This standardization is implemented and maintained with the use of various tools: organization charts, job descriptions, work control and evaluation, procedures and standards. Finally, the strict division of labor is a final essential feature, which includes recruitment criteria based on technical skills and a clear formalization of tasks, responsibilities and reporting relationships.

Weber (1971) argues that the bureaucratic model represents a definite gain in efficiency. However, this position was challenged by Crozier (1963), who exposes the limitations and flaws that reduce the (economic) efficiency of this model. Thus, standardization and depersonalization do not eliminate power games, territorial struggles or conflicts of interest, which are even further exacerbated by the stability of the system (maintained by the rules). Similarly, formalization does not prevent the emergence of areas of uncertainty that structure power relations and conflicts. Finally, Crozier refers to the establishment of “bureaucratic vicious circles”, constituted by the multiplication of rules that rigidify the organization.

Mintzberg (1982) distinguishes between different bureaucratic forms and identifies the contingency factors that explain an organization’s adoption of one form over another. Thus, he distinguishes between machine bureaucracy, based on work rules and processes, and professional bureaucracy, based on the qualifications and skills of each individual. In both cases, work coordination is impersonal and work is highly standardized.

Finally, Pichault and Nizet (2000) establish a link between this organizational model and the associated HRM model, described as “objectifying HRM” and characterized by the predominance of quantification. This model corresponds to practices such as quantitative recruitment planning, evaluation based on standardized criteria measured using quantitative scales, and promotion based on seniority or competitive examinations.

These different bureaucracy models may differ in some respects, but they are similar in terms of high standardization, depersonalization and a strict division of labor. Indeed, the use of quantification can promote the emergence and maintenance of these three characteristics.

3.1.1.2. The rational-legal authority of quantification

Quantification contributes to a form of standardization, as the quantification tools are the same for everyone and encourage objects, phenomena, etc. to fit into pre-established formats (Espeland and Stevens 1998). As discussed in Chapter 2, Porter (1996) also links the myth of objective quantification to depersonalization, based on the idea that a quantification operation reduces the influence and importance of the human being. Finally, Taylorism is a particularly emblematic example of the links that can be established between the division of labor and quantification.

According to Weber (1971), bureaucracy is also characterized by a particular form of authority or domination: rational-legal authority. This form of domination is based on a belief in the legality and rationality of the rules and authority exercised. More precisely, rationality can be “instrumentally rational” (allowing the effective adaptation of means to the goals pursued) or “value-rational” (corresponding to convictions). In both cases, rational-legal authority is characterized by a form of depersonalization: this explains why Pichault and Nizet (2000) define in part the objectifying HRM model by the notion of rational-legal authority.

Quantification could precisely be seen as one of the avatars of this rational-legal authority. It is characterized by depersonalization, and has, as previously seen, generated a number of myths (including that of objective quantification) that have reinforced the belief in its legality and rationality, and thus conferred significant power on it. Moreover, Weber insists that rational-legal authority is generally based on knowledge and technical expertise, with statistics being part of this body of knowledge, for example (Bruno 2013). Beyond the etymological origin of the word “statistic” already mentioned in the previous chapter, this may explain why quantification tools are regularly used in bureaucracies for different HR or managerial purposes: recruitment, staff appraisal, promotion, etc. Thus, several of the HRM practices that Pichault and Nizet (2000) describe for the objectifying HRM specific to the bureaucratic model are based on quantification: evaluation based on standardized criteria and quantified tools (such as rating scales) and accurately recorded working time, for example.

Thus, bureaucracy, aimed at rationalizing work, can instrumentalize quantification for standardization and depersonalization purposes, but also make it a tool for strengthening rational-legal authority.

3.1.2. New Public Management

More recently, at the end of the 20th Century, public action targeted another form of rationalization, this time directly inspired by methods from the private sector, which was called New Public Management (NPM). The use of quantified tools (indicators, metrics and dashboards, in particular) is one of the central characteristics of NPM (Belorgey 2013a; Remy and Lavitry 2017).

3.1.2.1. Rationalization over time

NPM aims in particular at introducing market mechanisms in the supply of goods and services of general interest, which implies, for example, directing activities and allocating services according to users’ needs and not according to pre-established rules or procedures, whilst also abandoning the specificities of civil servant status and the principles of advancement based on seniority. It also seeks to introduce more transparency into the quality and cost of services, which implies a greater use of evaluation. All this is aimed at greater efficiency in the use of public funds (Chappoz and Pupion 2012).

The concern about the efficiency of public activity is not new, as previously seen with Weber’s work. Moreover, from the first half of the 20th Century, Fayol was interested in rationalizing the organization of administrations and had already theorized precursor elements of NPM, or at least elements positioned halfway between Weberian bureaucracy and NPM (Morgana 2012). Thus, he emphasized the State’s accountability to taxpayers; he advocated remuneration and promotion on the basis of merit rather than seniority; and he suggested controlling work through timekeeping and a methodical analysis of work (similar to that proposed by Taylor), which he linked to greater transparency regarding the quality and cost of administrative services.

However, NPM is a new doctrine, because it breaks down its main objective, efficiency, into an arrangement of numerous subobjectives that translate into practices, introducing neoliberalism into the bureaucracy (Bruno 2013). Thus, the subobjective of orienting and allocating activities and services according to users’ needs is reflected in management control practices aimed, on the one hand, at measuring users’ needs and, on the other hand, at measuring the adequacy between the supply of public services and these needs. The subobjective of abandoning civil servant status and promotion on the basis of seniority is reflected in individual assessment practices based on work tests and the implementation of quantified objectives. The transparency subobjective is reflected in practices of evaluating work and activity, as well as in the communication of evaluation results (Espeland and Sauder 2007). More concretely, these evaluation practices are most often based on activity indicators, dashboards and rankings, which can then determine the resources allocated to the structures (Belorgey 2013b).

3.1.2.2. The role of quantification in the institutionalization and definition of NPM

These NPM practices are in fact largely based on the use of quantification (Remy and Lavitry 2017). Several concrete elements characterizing NPM are based on measurement practices:

– definition and monitoring of activity-related indicators (aiming at the transparency of the costs and benefits of public action);
– definition and monitoring of work-related indicators (in particular for staff evaluation);
– implementation of systematic procedures for the quantified evaluation of the effects of public policies;
– mobilization of benchmarking, particularly at the international level.

Several examples have already been given of the use of indicators to measure work or activity in jurisdictions. In the French hospital sector, for example, activity is measured by indicators related to the number and complexity of the acts performed (Belorgey 2013b; Juven 2016); in agencies helping people return to work, indicators are constructed to measure the rate and productivity of counselors, but also the quality of the service provided to job seekers, or the maintenance of the employability of the unemployed (Remillon and Vernet 2013; Remy and Lavitry 2017). In another field, at the international level, many studies have focused on the indicators used to measure the work of researchers and the reputation of their institutions, which feed into international rankings such as the Shanghai Ranking (Box 3.1). Thus, studies have examined the construction of the indicators (Altbach 2015), the construction of the aggregate measure used to rank institutions (Dehon et al. 2010), and the way in which researchers and institutions adapt their behaviors and practices according to the ranking (Espeland and Sauder 2007).
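To make concrete what is at stake in such aggregate measures, the following sketch (with invented institutions, indicators and weights – not the actual Shanghai Ranking methodology) shows how a composite ranking score can be built, and how the chosen normalization and weights, rather than the raw data alone, determine the final order.

```python
# Illustrative sketch of a composite ranking score. Institutions,
# indicators and weights are invented: this is not the actual
# Shanghai Ranking methodology, only the general aggregation logic.

def composite_scores(institutions, weights):
    """Normalize each indicator to 100 for the best performer,
    then combine the indicators into a weighted sum."""
    best = {ind: max(inst[ind] for inst in institutions.values())
            for ind in weights}
    return {name: sum(w * 100 * inst[ind] / best[ind]
                      for ind, w in weights.items())
            for name, inst in institutions.items()}

institutions = {
    "A": {"publications": 800, "awards": 4},
    "B": {"publications": 1000, "awards": 1},
}
weights = {"publications": 0.6, "awards": 0.4}

scores = composite_scores(institutions, weights)
ranking = sorted(scores, key=scores.get, reverse=True)
# With these weights, A ranks first (88.0) despite having fewer
# publications than B (70.0): the convention, not the raw data, decides.
```

Changing the weights to favor publications would reverse the order, which is precisely the kind of constructional choice the studies cited above examine.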

The quantified evaluation of public policies has led to many methodological developments in the fields of statistics and economic measurement. Thus, randomized experiments are among the most common methods used to isolate and measure the effects of public policies (Bruno 2015). Box 3.2 illustrates the use of this method to evaluate the French policy known as the anonymous CV.
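The logic of a randomized experiment can be sketched as follows (with entirely fictitious data and effect sizes – the actual anonymous CV evaluation was far more sophisticated): random assignment makes the two groups comparable, so that a simple difference in average outcomes estimates the effect of the policy.

```python
import random
import statistics

# Sketch of the logic of a randomized experiment, with entirely
# fictitious data: applications are randomly assigned to an
# "anonymous CV" treatment, and callback rates are then compared.

random.seed(42)

applications = list(range(1000))
random.shuffle(applications)
treatment = set(applications[:500])  # half get anonymous CVs

def callback(app_id):
    # Fictitious outcome model: a 10% baseline callback rate,
    # raised to 14% by the (assumed) treatment effect.
    rate = 0.14 if app_id in treatment else 0.10
    return 1 if random.random() < rate else 0

outcomes = {app: callback(app) for app in applications}
rate_t = statistics.mean(outcomes[a] for a in treatment)
rate_c = statistics.mean(outcomes[a] for a in applications
                         if a not in treatment)
# Because assignment is random, the difference in means estimates
# the average effect of the policy on callbacks.
effect = rate_t - rate_c
```

Without random assignment, the two groups could differ systematically (in qualifications, sectors, etc.) and the difference in means would no longer isolate the effect of the policy.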

The mobilization of benchmarking is part of a practice of comparing public policies at the international level, particularly at the European level. This practice, which comes from the private sector, aims to compare different entities on previously chosen measures (Bruno 2013): it is therefore a good illustration of what a commensuration operation is (Espeland and Stevens 1998). Benchmarking first requires defining the criteria against which the entities will be compared, and then transforming these criteria, which are sometimes relatively broad and conceptual (e.g. the “performance” of a service or the “quality” of a product), into quantitative indicators (Salais 2004). It then involves collecting the quantified measurements from the services that can produce them, and comparing the entities with each other on the basis of these measures. Finally, benchmarking generally includes a results communication phase (as part of the transparency subobjective mentioned above). Benchmarking is thus both a production of knowledge (through quantitative measures) and a power tool (since it demonstrates that a certain level of performance is achievable by others; see Bruno 2015). The description of a specific example of the implementation of these different steps clearly shows the difficulties faced in setting up benchmarking, and the knowledge and power issues that can be associated with it (Box 3.3).
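The commensuration step at the heart of these benchmarking stages can be sketched as follows (with invented entities and indicators): heterogeneous measures are rescaled to a common scale, averaged, and each entity is compared to the best performer – the “achievable” level others are then held to.

```python
import statistics

# Minimal sketch of the commensuration at the heart of benchmarking.
# Entities and indicator values are invented; "higher is better" is
# assumed for both indicators.

def min_max(values):
    """Rescale a list of raw values to a common 0-1 scale."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

entities = ["Country X", "Country Y", "Country Z"]
indicators = {
    "employment_rate": [68.0, 74.0, 71.0],
    "training_access": [0.35, 0.20, 0.50],
}

# Rescaling and equal weighting are themselves (contestable) conventions.
rescaled = {name: min_max(vals) for name, vals in indicators.items()}
scores = [statistics.mean(rescaled[ind][i] for ind in rescaled)
          for i in range(len(entities))]

benchmark = max(scores)  # the "achievable" level others are held to
gaps = {e: benchmark - s for e, s in zip(entities, scores)}
```

The knowledge/power duality appears directly in the last two lines: the same computation that produces comparative knowledge (`scores`) also produces the normative gap each entity is summoned to close (`gaps`).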

These various examples clearly show the central role played by quantification in institutionalization and even the definition of NPM. This central role is explained by the qualities attributed to quantification, which have already been mentioned: quantification is perceived as a tool of transparency, objectivity and neutrality, which in turn promotes efficiency and is therefore part of a rationalization process. The questioning of this myth and the observation of clandestine or deviant practices demonstrating the possibility of the instrumentalization of quantification (Remy and Lavitry 2017) have thus far not been sufficient to reduce its force in public discourse.

3.1.3. Algorithmic management

Finally, as seen in Chapter 1, the recent development of platforms for direct contact between customers and service providers has led to the emergence of algorithmic management, which can be considered as a form of rationalization taken to the extreme.

3.1.3.1. Algorithmic management and its challenges

Algorithmic management corresponds, as previously seen, to a situation where an individual’s work is entrusted to him/her by an algorithm, and not by a human being (manager). In extreme cases, the algorithm is also used to assess the quality of work. This leads to a total disappearance of the role of the manager as we know it. Chapter 1 already gave examples of management situations by algorithms (Uber drivers, Deliveroo couriers, Turkers, Google raters). Box 3.4 lists some of the questions raised by this new type of management.

3.1.3.2. Extreme rationalization?

Algorithmic management raises a large number of questions. Beyond these questions, it appears to be an extreme form of rationalization. Indeed, it displays several characteristics of rationalization: efficiency, cost and staff reduction, and standardization.

The search for efficiency is seen in particular in the concern to reduce the time spent on each task and the amount of break time. Thus, the Uber algorithm, which indicates to the driver the route to follow, aims to reduce the duration of the journey, by integrating, for example, information related to traffic and possible roadworks, and by proposing instantaneous detours. Similarly, when drivers finish a journey, they are immediately offered another one where possible, close to the end point of the previous journey. This reduces journey time without customers, which can represent a form of break time.

The objective of cost reduction is visible in several features of these platforms. First, most workers provide their own work tools (car, bicycle, computer, Internet connection), which reduces equipment costs accordingly. Second, the almost total disappearance of the management line reduces staff costs. Finally, most of these platforms exploit, or have exploited for several years, the ambiguities of national laws to ensure that workers are not technically employed by them, which reduces employer contributions and offers more flexibility in labor management.

The use of highly prescriptive algorithms tends to standardize work. Uber drivers follow the indicated itinerary and therefore all work more or less in the same way. However, as seen in Box 3.4, workers can regain some room for maneuver and autonomy in areas other than the pure performance of the task, such as in the choice of their working hours or their work tool (interior design of the car for Uber drivers, personal computer choice for Turkers or Google raters, for example).

Finally, throughout the 20th Century, management and the HR function have been able to use quantification as a rationalization tool. The links between managerial rationalization and quantification may have evolved and been reformulated, but the three examples given here clearly highlight their existence. Embodying a form of rational-legal authority characteristic of bureaucracy, quantification then appeared as part of the very definition of NPM. Finally, it is inseparable from algorithmic management, which is essentially based on quantification tools. In these three examples, quantification is used as a rationalization tool, aiming, for example, at efficiency and cost reduction, until reaching a form of paroxysm in algorithmic management.

However, this encourages me to revisit the notion of rationalization and to reconsider Berry’s (1983) distinction between universal rationality and local rationalities. Universal rationality can be defined as the rationality of the organization, transcending individual points of view and dissensions between departments, for example, while local rationalities reflect these dissensions. The examples of the Shanghai Ranking (Box 3.1) and European social benchmarking (Box 3.3) clearly illustrate the difficulty of reconciling these two levels (universal rationality corresponding in both cases to the international level, local rationalities referring to national levels or research institutions). The standardization provided by quantification, which seeks to reconcile the different levels, often amounts to making one level prevail over the others. Algorithmic management gives the example of a situation where universal rationality (the efficiency objectives of the company, Uber for example) totally prevails over local rationalities, by erasing, among other things, the possibilities of contestation and collective organization at the local level, due to the virtual disappearance of interpersonal and collective labor relations. This crushing of local rationalities then operates through a form of “datacracy” (Bruno 2013), i.e. a situation where power is held by those who possess the data, or even delegated to the algorithms that process these data (Cardon 2018).

Beyond the notion of rationalization, this first section also shows to what extent the use of quantification embodies a form of technical expertise that may have contributed to the professionalization or at least to the professional identity of the HR function (Dubar 1998). Indeed, professional identities are often understood in terms of power relations and the perception of the position of each actor within the organization (Sainsaulieu 2014), and it seems, in the examples given, that the quantification mobilized in the service of managerial rationalization can constitute a power tool at the service of management and the HR function.

3.2. Distrust of data collection and processing

While management and the HR function can appropriate quantification as a tool for rationalization, employees and their representatives do not necessarily share the same concerns. This is partly due to a very different role in the quantification infrastructure: employees become providers of their data, while management and the HR function are rather the users. However, providing data requires a certain amount of trust in the company and in the way it may process it. Today, it seems that a form of mistrust is fueled by two fears on the part of employees and their representatives: fear linked to the aims pursued by the company, and fear linked to the idea that figures could be “made to say anything”.

3.2.1. Providing data, not such a harmless approach for employees

While many social networks and digital services are based on the principle of providing data in exchange for a free service, which creates a routine form of providing data, companies may encounter difficulties in this area. Employees may be reluctant to provide their personal data, even those with a professional dimension, to their company. This is due, among other things, to the lack of visibility regarding the objectives pursued by the company and the potential gains for employees.

3.2.1.1. One observation: employees hesitate to provide their data to the company

The current deluge of digital data, which is widely documented (Beer 2019), is due in part to the emergence of a new economic model. This model consists of providing a digital service (messaging, access to a social network, access to an application) in exchange for data. In other words, users do not pay for the service in hard cash, but with their data. In turn, the company providing the service is paid by advertisers, who see this data deluge as an opportunity to offer targeted ads, perceived as more effective than non-targeted ones. Most Google services, from Gmail to Google Maps, work on this model, as do most social networks, from Facebook to LinkedIn.

This means that individuals are used to providing their data in exchange for a service. Moreover, some services are only of interest if the user provides their data, and their interest for the user increases with the amount of data provided (Beuscart et al. 2009). Thus, a user who does not share any content on Facebook or Twitter probably derives less benefit from the social network than a particularly active user. Likewise, a user registered on LinkedIn who has not uploaded information on their professional background or skills loses one of the main benefits of the network – that of being identified by recruiters. In addition, several of these networks have implemented gamification strategies to encourage users to complete their profiles as fully as possible. On LinkedIn, this takes the form of points and profile levels based on the data completed. A study conducted on the Flickr photo and video sharing site reports on this trend (Box 3.5).

All these incentives to produce and provide data contribute to a very large increase in the volume of existing data, structured or unstructured, which provides extremely varied information about the shared content itself, such as photo tags, but also about the individuals themselves, such as purchases on Amazon (Box 3.6).

Individuals are therefore used to providing their data, and few are concerned about the sometimes unprotective conditions of use of the platforms that store them. However, recent data use scandals (e.g. the Facebook–Cambridge Analytica data leak) have led to greater, albeit still fledgling, awareness.

Yet, it seems that companies have real difficulty in getting their employees to voluntarily provide their personal data. This is an important issue: few companies have data on the individual skills of their employees. More precisely, most large companies have a competency dictionary that associates jobs with skills, and know the job in which each employee is positioned, which can give an idea of each employee’s skills. But this idea remains rather theoretical and is subject to several factors of imprecision. Indeed, an employee may have many more skills than his or her job requires, particularly because of training or previous professional experience; conversely, an employee, particularly a beginner in a job, may not yet have all the defined skills. Few companies are thus able to know the individual skills of their employees at any given time. However, these data can be crucial for certain HR processes, such as job and skills planning, or for establishing a training plan. Such data do exist, albeit in self-reported form, on networks such as LinkedIn, but companies have difficulty obtaining this type of self-reporting from their employees (Box 3.7).

3.2.1.2. Suggestions for an explanation

Several explanations can help us to understand this contrast. The first is that the population present on social networks is not necessarily representative of the population of employees in companies. The second is that employees cannot identify the services that the company will be able to provide on the basis of these data. The third is the fear of how the data will be used.

First of all, the considerable mass of users on social networks and of the content exchanged on them should not make us forget that many individuals remain either absent from these networks or inactive. For example, the percentage of Europeans using Facebook, the most popular social network, was 41.7% in 2017. This means that more than half of Europeans do not use it. In addition, some Facebook members use it as a monitoring tool, i.e. to look at other members’ activity, without having any activity themselves. Besides, the population of social network members is not necessarily representative of the overall population of a country: on Facebook, 18–34 year olds are significantly overrepresented, unlike those over 55. Finally, the active population on social networks, i.e. those who are willing to share and provide their data, is not necessarily representative of the population of a country, let alone of a given company. It is this lack of representativeness of the populations active on social networks that makes social science research based solely on these data risky (Ollion 2015). This may therefore explain the apparent gap between the sharing and data provision behaviors of individuals on social networks and the difficulty for companies in obtaining the same behaviors internally.

Second, the economic model of social networks and digital services described above implies an exchange between, on the one hand, a service provided to the user without monetary compensation and, on the other hand, user data through which the company can be remunerated by advertisers. This model is thus based primarily on providing a service to users, and if possible a service that they can hardly do without: email on Gmail or routes on Google Maps, for example. Moreover, some companies in this sector have long been, or still are, unprofitable, due to a time lag between the free provision of the service to users and the collection and stabilization of advertisers’ payments. By contrast, companies still communicate relatively little about the services they will be able to provide for employees with their data. This lack of communication is due to two factors. First of all, some of the uses that can be made of employees’ individual data benefit the company more than employees. For example, identifying individual skills for workforce planning purposes is more a business objective than an employee need. Second, it may be difficult to identify upstream, i.e. even before the data are available, what services can be provided from them. In other words, companies often adopt an approach that is the opposite of that used by digital players. Instead of offering a service that becomes so essential that individuals provide their data almost without asking themselves any questions (smartphone, messaging, etc.), they expect employees to provide their data in advance, without any visibility on what it will bring them. This reversal of logic probably explains a large part of employees’ reluctance to provide their data.

Employees and their representatives may show a certain distrust of the company and the way in which it may use (or even misuse) their personal data. Indeed, examples illustrate the possibility of using data from social networks for disciplinary purposes (Box 3.8).

Beyond this disciplinary dimension, the very fact of having the data is a form of controlling employees. Thus, at the time of recruitment, retrieving information on candidates from their profile on social networks makes it possible to check their compliance with the company’s values and expectations. Foucault (1975) has clearly shown the link between transparency, knowledge and the possibilities of control. Thus, the panoptic form of surveillance (where one person can look at and monitor all the others) described by Foucault corresponds well to a situation where individuals are led to provide a maximum of data on platforms, whether external platforms or a platform for their own company. In this type of situation, power is very discreet, but control is in fact widespread. This may explain the reluctance of employees to provide their data to their company. This reluctance is often supported by employee representatives (Box 3.9).

Finally, the appropriation by employees and their representatives of HR quantification tools is limited by a form of passive resistance: employees do not actively contribute to the production of data sets on themselves, which would allow the development of new services or new uses of quantification. Despite its apparent discrepancy with individual behavior on the Internet, which most often consists of providing large amounts of data without worrying too much about it, this passivity can easily be explained, among other things, by the lack of visibility of the services to be expected and by a form of mistrust linked to possible uses of data for control and discipline purposes.

3.2.2. Can numbers be made to reflect whatever we like?

The appropriation by employees and their representatives of HR quantification is also confronted with a relatively common discourse, based on the idea that figures can be “made to say anything and everything”. This discourse, which refers to a form of instrumentalization of quantification, is therefore the opposite of the discourse on its objectivity and rigor, but it is also part of the “fantasy of quantification”. It then encourages the consideration of ways to ensure a form of rigor in interpreting the figures.

3.2.2.1. The other side of the myth of objective quantification: the myth of instrumentalized quantification

While the data are seen as objective and neutral in reflecting reality, their interpretation is sometimes subject to virulent criticism, from both specialists and neophytes. Thus, many experts denounce common misconceptions such as the confusion between correlation and causality, or the reductionism involved in trying to represent reality by means of a single variable (Gould 1997). On the neophyte side, several attitudes can be identified, on a continuum ranging from a form of passive acceptance of interpretations, under the effect of a kind of amazement by the numbers (see Chapter 2, Box 2.7, for example), to a virulent criticism expressing the idea that the same number can be interpreted in several ways. This criticism is partly justified, particularly when interpretations are not formulated precisely enough (Box 3.10).
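The confusion between correlation and causality denounced by the experts cited above can be made concrete with a short simulation (all variable names and data here are hypothetical, chosen purely for illustration): a confounding factor drives two quantities that have no direct causal link, yet the two end up strongly correlated.

```python
import random

random.seed(0)

# Hypothetical illustration: a confounder ("site size") drives both
# training spending and performance. Neither causes the other, yet
# they come out strongly correlated.
n = 500
size = [random.gauss(0, 1) for _ in range(n)]
training = [s + random.gauss(0, 0.5) for s in size]      # driven by size
performance = [s + random.gauss(0, 0.5) for s in size]   # also driven by size

def pearson(x, y):
    # Plain Pearson correlation coefficient.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(training, performance)
print(f"correlation(training, performance) = {r:.2f}")  # strong, yet not causal
```

Interpreting such a coefficient as "training causes performance" would be exactly the interpretative leap criticized here: the association is entirely produced by the third variable.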

It is therefore relatively tempting to contrast the stage of setting the world in data, seen as an objective and a guarantee of rigor, and the stage of interpreting quantified results, seen as potentially biased and prone to being instrumentalized toward certain ends. For example, several studies have examined the rhetorical instrumentalization of numbers and statistics in political or public discourse (Gould 1997; Espeland and Stevens 2008; Obradovic and Beck 2012). This instrumentalization may involve selecting the numbers, methods and results on which the interpretation is based, or even reducing interpretation and purpose to a single key figure, formulating deliberately vague interpretations, or introducing a significant leap between the mathematical meaning of the figure and what is said about it (e.g. from correlation to causality).

However, an attempt has already been made in Chapter 2 to deconstruct the myth of objective data setting; here an attempt can be made to deconstruct the myth of instrumentalized quantification, by highlighting certain conditions aimed at limiting this instrumentalization.

3.2.2.2. How do we limit the instrumentalization of quantification?

The first condition, set out by Salais (2016), is to not forget the constructed nature of quantification. This includes recognizing the role of prejudice, social conventions and human bias both in the setting of the world into statistics and in the interpretation of numbers. This is intended to limit the effect of stupefaction before the numbers, linked in part to the myth of quantification.

The second condition refers to a number of statistical and scientific rules. Thus, it is important to avoid measuring percentages on very small samples. On these small samples, a percentage can vary considerably: for example, going from three to four women in a management committee of 10 people corresponds to an increase of 10 points if one reasons in percentages. Similarly, it is preferable to limit the interpretative leap between what exactly the number measures and the interpretation made of it (see Box 3.10). Thus, it is tempting to interpret correlations between employee engagement and company performance as causalities, or changes in the percentages of employees engaged as direct effects of policies put in place, but few methods really make it possible to confirm this type of causality – and certainly not a simple correlation calculation. Finally, communicating about the whole process, such as the choice of indicators and methodology, seems necessary to make it possible to discuss these choices.
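The small-sample caution above comes down to simple arithmetic, sketched below with the chapter's own example: on a committee of 10 people, a single individual is worth 10 percentage points, whereas on a large sample the same one-person change is barely visible.

```python
# The chapter's example in numbers: on a 10-person management
# committee, one more woman moves the percentage by 10 points.
committee_size = 10
before = 100 * 3 / committee_size  # 30.0%
after = 100 * 4 / committee_size   # 40.0%
print(f"{before:.0f}% -> {after:.0f}%: {after - before:.0f} points for a single person")

# On 1,000 people, the same one-person change is a rounding error.
large_swing = 100 * 301 / 1000 - 100 * 300 / 1000
print(f"on 1,000 people: {large_swing:.1f} points")
```

The same absolute change thus produces wildly different headline figures depending on sample size, which is why percentages computed on very small groups should be treated with caution or avoided.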

The creation of discussion forums on numerical interpretations is a third condition that seems essential in order to limit the possibilities of instrumentalizing quantification instruments. Indeed, quantification is no more open to instrumentalization than other types of evidence gathering or other scientific approaches. However, it may sometimes leave less room for criticism because of the impression that a greater technical and scientific background is needed to understand it and therefore possibly challenge it. This is why, in companies, it seems crucial that there are forums for discussion around HR quantification, made up of individuals trained in and professionals of this subject. The role of employee representatives, for example, seems to be very important here. It seems possible for companies to organize or finance training on data analysis or on mastering quantitative methods, in order to spread the possibility of a balanced and informed debate on figures and their interpretation. This point will be returned to in Chapter 5.

Finally, the employees’ relationship to HR quantification processes is first structured by a form of distrust in the collection and processing of data and in the uses and interpretations of figures that can be made by the company. This mistrust is reflected in particular in a reluctance to voluntarily provide personal data to the company, a passive resistance that contrasts with the lightness with which individuals entrust their data to external digital actors (Google, Facebook, LinkedIn, etc.), and which limits the company’s possibilities for data collection and therefore, ultimately, for quantification.

3.3. Distrust of a disembodied decision

This relationship is also structured by a distrust of decision-making based essentially on figures, which becomes somewhat disembodied. In Chapter 2, the links between quantification and decision-making were explored (objectivity, personalization, prediction). In this section, the focus is on how employees and their representatives perceive decisions based on quantification. By using numbers to make decisions, a human being evades a form of responsibility and does not really make the decision themselves. In addition, cases where decisions are made without any human mediation are becoming more and more common. The notions of responsibility for decision-making and of employee empowerment seem crucial to understanding this distrust.

3.3.1. Decisions made solely on the basis of figures

Situations where a human being makes a decision based almost exclusively on figures have become relatively common, particularly with regard to remuneration or promotion decisions in companies. However, these situations have two characteristics that can be criticized by employees and their representatives. First of all, the employee’s voice becomes inaudible and is often not taken into account. Second, the use of an often standardized quantification makes it more difficult to take into account particular and individual circumstances.

3.3.1.1. Has the employee been silenced?

Many examples can be given of decisions taken on the basis of figures in organizations, from collective decisions (such as the number of redundancies under a restructuring plan) to individual decisions (such as individual raises or promotions in companies that have highly standardized their processes). The measures used for these decisions may also vary, from economic data (as part of restructuring plans) to aptitude tests, or individual activity and performance indicators (e.g. figures about sales made by a seller). The reader may refer back to Chapter 1 (section 1.1) for more specific examples.

Here, the focus is on the consequences for the employee of this form of decision-making. The first consequence refers to the limited consideration given to the employee’s voice. Thus, when the decision is based on figures, the employee has few means of contesting it, and individual claims can hardly be taken into account. Indeed, if the process bases decisions essentially on figures precisely in order to claim objectivity and justice, as seen in Chapter 2, then leaving room for individual adjustments undermines this image of objectivity and justice (Box 3.11). Pichault and Nizet (2000) point out that quantified standards aim, among other things, to limit managerial and interpersonal arbitrariness.

However, several studies have shown the importance of giving individuals the opportunity to express their opinions on decisions that concern them. Thus, Marchal (2015) criticizes, in the context of recruitment, selection “at a distance”, which does not allow candidates to express themselves in an exchange with the recruiter: selection then rests on recorded individual characteristics rather than on their expression.

In the context of the evaluation, the School of Human Relations highlighted the importance of organizing an interpersonal exchange between the person being evaluated (the employee, for example) and the person assessing (generally, their manager), enabling the employee to shed light on, or even discuss, the decisions taken, but also providing a privileged opportunity for listening and communication. Indeed, the supporters of this school insist on the importance of the human factor and interpersonal relationships in the productivity and commitment of individuals. Therefore, rather than focusing on an unattainable ideal of objective and fair evaluation, they recommend using evaluation as a means of creating a space for exchange and discussion. For its part, the current of organizational justice also strongly values the importance of hearing the employee’s voice. For example, a review of the literature on evaluation processes shows that employees are more satisfied, consider the process fairer and are more motivated when they feel they can express themselves (Cawley et al. 1998; Cropanzano et al. 2007). Cawley et al. (1998) identified five ways to encourage employee expression in relation to evaluation: the opportunity to express an opinion on the evaluation process and outcome, the opportunity to influence the outcome through this expression, the opportunity to self-assess, the opportunity to participate in the development of the evaluation system, and the opportunity to participate in setting objectives.

Finally, the critical currents on evaluation also highlight the fact that it can be a vector of employee domination (Gilbert and Yalenios 2017). In fact, evaluation can be defined as a constraint imposed on employees. However, this constraint can be perceived as more alienating and enslaving when the employees do not have the opportunity to express themselves. As a result, employees and their representatives may show a certain distrust of systems that leave no room for the employee’s voice.

Moreover, silencing employees prevents the always particular and individual circumstances of the exercise of work from being taken into account.

3.3.1.2. The difficulty in taking into account particular circumstances

Indeed, the second consequence refers to the poor consideration of particular or individual circumstances. Adjusting a decision based on figures with an ideal of objectivity and justice to individual characteristics is indeed a threat to that ideal. However, individual performance is almost always influenced by the particular circumstances, personal or professional, in which the work is performed (Box 3.12).

The impossibility of taking into account these particular circumstances, characteristic of the use of quantified tools (Pichault and Nizet 2000), may ultimately undermine the ideal of justice that is supposed to be guaranteed by the use of quantification. The Aristotelian distinction between equality and equity allows us to understand this phenomenon. Indeed, Aristotle presents equity as the possibility of taking into account particular circumstances. Other authors have since made a more precise distinction between equity, equality and the consideration of needs. According to Cropanzano et al. (2007), equity refers to assessing and rewarding employees according to their respective contributions, equality refers to rewarding all employees in the same way (the principle of general raises without taking into account individual performance, for example) and consideration of needs refers to assessment and reward according to individual needs (such as proposals for adapted development plans).

Decision-making based essentially on quantification corresponds to a combination of equality and equity (if the respective contributions of individuals are taken into account), but does not allow individual needs to be taken into account. If the ideal of justice is closer to the consideration of needs than to equality or equity, then this type of decision-making may not achieve that ideal. Therefore, workers and their representatives may oppose recruitment, selection or evaluation systems based solely on quantified indicators.

3.3.1.3. Decision-making without accountability

Decision-making based essentially on quantified indicators tends to remove the responsibility of the person who is supposed to embody the decision (Marchal and Bureau 2009). Thus, the multiplication of quantified tools allows the evaluator to relieve themself of the burden of judgment, and to thus disengage themself from it. For example, a manager may communicate a decision on an individual raise or promotion to an employee in their team, while blaming the figures for this decision. This disengagement therefore corresponds to a certain lack of responsibility on the part of decision-makers, which ultimately corresponds to a form of disembodied decision-making. However, this phenomenon may be poorly perceived by employees, who may see it as a sign of disengagement on the part of their manager and who may also suffer from the impossibility of attributing the decision taken to a person who is responsible for it.

This depersonalization, disembodiment or unaccountability of decision-making finds its extreme form in situations where decisions are only made by algorithms, without the mediation of a human being to embody them and to explain them to employees.

3.3.2. Decisions made solely by algorithms

The notion of “algorithmic management” has already been mentioned. It refers to situations where the role of the manager, for example in the allocation or evaluation of work, is entirely assigned to an algorithm. These situations raise two major questions. First, that of responsibility: who is responsible for the decision made by the algorithm? Second, these situations question the possibility for employees to maintain room for maneuver and autonomy with regard to the algorithm, which refers to the notion of empowerment.

3.3.2.1. The question of liability

The many current debates on autonomous cars and other robots that must make decisions in place of human beings underline the importance of the notion of responsibility. For example, research has shown that, even if autonomous cars caused fewer accidents than human drivers, these accidents would be less “accepted” by individuals, as liability could not be clearly assigned (Hevelke and Nida-Rümelin 2015; CNIL 2017).

Indeed, the machine probably cannot be held responsible for the decision. However, the human beings who participated in the decision production chain are extremely numerous. They include, among others:

  • – the management of the company that decided to produce and then market the machine;
  • – data experts (Beer 2019) who have developed the computer codes necessary for the proper functioning of the machine;
  • – the testers (internal and external to the company) who decided, following the tests that were carried out, that the machine could be marketed safely;
  • – experts mandated by commissions, who have authorized its placing on the market;
  • – the users and owners of the machine.

The length of this chain of responsibilities clearly shows the impossibility of attributing a decision taken by a machine to a particular human being, or even a group of human beings. Moreover, algorithms now learn from data sets, which raises the question of liability in a different way. Thus, the conversational robot Tay, put online in 2016 by Microsoft, had to be suspended after only 24 hours when, confronted with data from social network users, it began to make racist and sexist comments (CNIL 2017).

Moreover, how can the quality of a decision taken by a machine alone be measured? Should this quality be measured against the human decision, considering that the human decision is the “right” decision and that the machine must adjust to it? But in this case, how can we take into account the variability of human decision-making (Box 3.13)? Or should we consider that the interest of the machine is precisely to make better decisions? But how can we know if the machine has made a better decision than a human being?

Cardon (2018) raises the question of the responsibility of algorithms from another angle, that of power. He insists on the personification of algorithms in current discourses, which attribute to them a form of responsibility and power in the organization of information and social life. And, in fact, this personification also finds its source in the growing autonomy of algorithms in relation to their designers6. This empowerment also stems from the fact that, by its very construction, an algorithm must transform substantial rules into procedural rules. Indeed, a human being or a company may want to develop an algorithm that will suggest the most appropriate content for each individual (substantial rule). The algorithm, which has no symbolic understanding of this rule nor of the data it manipulates, must transform this substantial rule into a procedural rule, i.e. into procedures for calculating and coding information that will best approximate people’s tastes. Collaborative filtering, which consists of approximating individuals’ tastes by the similarity of their history with that of other individuals, illustrates this transition from the substantial rule to the procedural rule. Finally, it is also this transition that contributes to a form of empowerment of algorithms, in that they do not obey the principles of rationality and human modes of understanding.
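The substantial-to-procedural translation that Cardon describes can be sketched in a few lines of code. This is only an illustrative toy (user names and item histories are invented): the substantial rule “suggest content each person will like” becomes the procedural rule “recommend items consumed by the user with the most similar history”.

```python
from math import sqrt

# Hypothetical consumption histories (sets of item identifiers).
histories = {
    "ana": {"a", "b", "c"},
    "bob": {"a", "b", "d"},
    "eve": {"x", "y"},
}

def similarity(u, v):
    # Cosine similarity between two users' sets of consumed items:
    # a purely procedural stand-in for "having similar tastes".
    inter = len(histories[u] & histories[v])
    return inter / sqrt(len(histories[u]) * len(histories[v]))

def recommend(user):
    # Procedural rule: take the most similar other user and suggest
    # what they have consumed that `user` has not yet seen.
    others = [u for u in histories if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    return histories[nearest] - histories[user]

print(recommend("ana"))  # bob is nearest; suggests {'d'}
```

Nothing in this procedure “understands” taste: it only counts overlaps, which is precisely the gap between the designer’s substantial intention and the algorithm’s procedural operation.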

According to Cardon, many voices are calling for a guarantee of “neutrality” on the part of algorithms. However, an algorithm cannot by definition be neutral, since its essential purpose is to select, order, sort, filter and classify information. Cardon proposes replacing the imperative of neutrality with an imperative of loyalty. In other words, platforms using algorithms must clearly explain what they do, how they are built, what criteria they use for rating or filtering, etc. In companies, this rule is just as important, and its application is demanded by employee representatives (e.g. the CFE-CGC in France in the “Ethics & Digital HR” charter). They also highlight the fact that the more complex an algorithm is, the more difficult it will be to explain. There is therefore an argument in favor of mobilizing simpler, and therefore more explainable, algorithms.

The question of responsibility for the decision taken by the algorithm remains unresolved, and several answers can be provided by experts, States and even the international community, from the responsibility of the companies that produce and use the algorithms to the individuals who own the machines based on the algorithms.

In companies, this issue is sensitive and important. Indeed, many HR decisions can have a significant impact on the professional and personal future of individuals. Knowing who can be held responsible for these decisions, for example in the event of a dispute or litigation, therefore seems necessary. For the time being, in the countries of the European Union, the General Data Protection Regulation 2016/679 provides that individuals have the right not to be subject to a decision based solely on automated processing that produces effects significantly affecting them. This currently limits the possibility of fully automating processes such as CV preselection or promotions. But this rule does not exist in other countries and, moreover, its application in the European Union may raise questions, insofar as a company can always add a human intermediary, giving the impression that the rule is respected when it is not if the intermediary simply follows the instructions of the algorithm blindly.

3.3.2.2. Algorithms perceived as black boxes: an impossible empowerment?

The notion of loyalty is based on the idea that it is necessary to be able to explain exactly what algorithms do. In fact, it seems necessary to explain how algorithms work in order to guarantee a form of employee empowerment. The notion of empowerment gives rise to very varied definitions, particularly when it comes to workers. However, this notion is based on the idea of giving power to employees, and thus contributing to a redistribution of power within the company (Greasley et al. 2005). The literature on the subject often focuses on employee empowerment vis-à-vis their manager, but the notion seems also applicable to the relationship between employees and algorithms. Thus, Christin (2017) refers to cases where workers manage to “play with the algorithm”7 because they understand and are able to master its operating rules. For example, journalists whose performance is partly measured by the e-reputation of their articles use titles that are particularly attractive in terms of the number of clicks, but which do not necessarily reflect the content of the article, or ask that their article be positioned at the top of the page at times when there is more traffic on the Internet, which then increases their e-reputation as measured by the algorithm. This type of playing with algorithms or quantification refers to a form of “reactivity” (Espeland and Sauder 2007), characteristic of a worker taking back power over the algorithm.

However, this is only possible to the extent that the worker understands how the algorithm works, what data and calculation rules it uses. Yet, algorithms sometimes remain “black boxes” (Christin 2017) whose mode of operation remains incomprehensible. This type of situation then seems to be the opposite of the idea that quantification provides a form of transparency (Espeland and Stevens 2008; Hansen and Flyverbom 2015). Once again, this underlines the changes brought about by the increasing mobilization of algorithms in the world of quantification.

Finally, employees and their representatives may therefore show a certain level of mistrust of the company’s intentions when collecting and processing data, and of a form of disembodied decision, whose responsibility may be difficult to establish. This may limit their appropriation and acceptance of the quantification tools used in HR.

This chapter focused on the appropriation of HR quantification tools by the company’s stakeholders. It very schematically proposed an analytical distinction between management and HR, on the one hand, and employees and their representatives, on the other hand. While management and the HR function may see quantification as a rationalization tool, which argues for its dissemination, employees and their representatives may be reluctant, for example when asked to provide their data to the company. Indeed, they may see quantification as a threat to the quality of the decisions taken and to their room for autonomy. The HR function is then encouraged to develop strategies to reduce this resistance. It can thus seek to highlight the contributions of quantification for individuals, building on the arguments outlined in the previous chapter: a guarantee of objectivity, the possibility of providing new personalized services to employees, the possibility of adopting a more proactive and less reactive approach, etc.

  1. https://www.domo.com/learn/data-never-sleeps-6 (accessed October 2019).
  2. https://www.statista.com/statistics/241552/share-of-global-population-using-facebook-by-region/ (accessed October 2019).
  3. Demographic distribution of Facebook members in the United States: https://www.statista.com/statistics/187041/us-user-age-distribution-on-facebook/ (accessed October 2019).
  4. See especially https://www.employmentbuddy.com/HR-Blogs/Details/Fair-dismissal-following-historic-derogatory-comments-on-Facebook (accessed October 2019).
  5. In particular: http://moralmachine.mit.edu/ and http://moralmachineresults.scalablecoop.org/ (accessed October 2019).
  6. We will examine this in more detail in Chapter 5.
  7. According to the definition of Espeland and Sauder (2007, p. 29): “We define ‘playing’ as manipulating rules and numbers in ways that are unconnected to, or even undermine, the motivation behind them.”