1
From the Statisticalization of Labor to Human Resources Algorithms: The Different Uses of Quantification

Quantification can be used in many HR processes, such as recruitment, evaluation and remuneration (with job classification, for example). In fact, human resources management gives rise to a very wide variety of uses of figures. The first use refers to decision-making concerning individuals (section 1.1), i.e. using quantified information to inform or justify decisions concerning specific individuals: candidates in recruitment, or employees in career management or remuneration, for example. The second use corresponds to a broader adoption of figures at the collective level, rather than at the individual level (section 1.2). Historically, this use involved legal reporting and dashboards. It is therefore a question of defining relatively basic indicators and metrics aimed at monitoring or steering a situation (e.g. the number of employees) or a phenomenon (e.g. absenteeism). However, these basic indicators are not always sufficient, particularly because of the complexity of certain HR phenomena. Absenteeism can certainly be measured and monitored with basic bivariate indicators, but these will not be sufficient to identify its determinants, and therefore to define appropriate policies to reduce it. As a result, more sophisticated statistical methods have gradually been introduced in the HR field, both on the research side and on the business side: this approach is regularly referred to as “HR analytics”. More recently, the emergence of Big Data and the mobilization of algorithms in different sectors of society have gradually spread to the HR sphere, even if the notion of “Big Data HR” remains vague (section 1.3). This new horizon raises new questions and challenges for the HR function.

It should be stressed that the boundaries between these different uses are tenuous and shifting, and therefore this distinction remains partly arbitrary and personal. Thus, a dashboard can mobilize figures initially constructed with a view to decision-making about individuals. In addition, traditional reporting, which is particularly rich in cross-referencing, can be the beginning of a more sophisticated quantitative analysis, and produce similar results. Similarly, prediction and customization algorithms such as job or training recommendations, which we will classify under the category of Big Data and algorithms, are essentially based on statistical analysis tools (correlation, linear or logistic regression, etc.).

However, this chapter will focus on defining the outlines of these three types of uses, using definitions and examples.

1.1. Quantifying reality: quantifying individuals or positions

The HR function is regularly confronted with the need to make decisions about individuals: recruitment, promotion, remuneration, etc. However, under the joint pressure of ethical and legal issues, particularly around non-discrimination, it is also driven to substantiate these decisions as much as possible in order to establish their legitimacy. One response to this search for justification is to mobilize quantified assessments of individuals or of work (Bruno 2015). These operations of statisticalizing the concrete world (Juven 2016), or of commensuration (Espeland and Stevens 1998), aim both to inform decisions and to justify them.

1.1.1. The statisticalization of individuals and work

To account for these operations, the focus here is on two types of activity. The first concerns the quantification of individuals and refers to, among other things, tools proposed by the psychotechnical approach briefly described in the introduction. The second refers to the quantification of work, necessary, for example, to classify jobs and thus make decisions related to remuneration, but which raises just as many questions because of the particular nature of the “work commodity” (Vatin 2013).

1.1.1.1. Different tools for the quantified assessment of individuals

Faced with the need to make decisions at an individual level (which candidate to recruit, which employee to promote, etc.), the HR function has had to equip itself with different types of quantified evaluation tools (Boussard 2009). Some tools are, in fact, partly the result of psychotechnical work, but HR agents do not necessarily master the epistemology of this approach: the tools are often used without real awareness of their underlying methodological assumptions. The adoption of quantified HR assessment tools has been relatively gradual, and two main factors have promoted it (Dujarier 2010). First, the transition to a market economy was accompanied by a division of labor and a generalization of wage employment, which required reflection on how pay levels, and differences in pay levels within the same company, are formed and justified. Second, the practices of selecting and assigning individuals within this division of labor stimulated the quantified assessment of individuals.

Several examples are given here, highlighting the uses made by the HR function but also the criticisms they have attracted. However, in this chapter we do not dwell on possible biases, and therefore on the questioning of the notion of objectivity, as this will be the subject of section 2.1.

Psychological testing is a first example of a quantified assessment tool. Its use is frequent in the case of recruitment, and it can have several objectives. First, it can aim to match a candidate’s values with the company’s values. In this case, the test is based on the values and behaviors of the individual. Then, it may aim to match the personality of a candidate with what is generally sought by the company. In this case, the test includes questions that focus on behavior in the event of stress, uncertainty and conflict, for example. Finally, it may aim to match the personality of a candidate with the psychological composition of the team in which a position is to be filled. This variety of uses underlines the fact that the implementation and use of this type of test require upstream reflection in order to provide answers to the following questions: What are we trying to measure? What is the purpose of this measurement?

Once these questions have been answered, the second step is to answer the question: how do we measure what we are trying to measure? To this end, the academic and managerial literature provides many scales for measuring different characteristics and attributes of individuals. Finally, once the test has been taken, a last reflection must be carried out on how to use its results: to rank individuals, as a support for the recruitment interview, or as a decision-making aid. A characteristic of these tests is that they can lead to a classification of individuals into different profiles that are not necessarily ranked hierarchically. Thus, a test on one’s relationship to authority may lead to a classification into different types of relationship (submission, rebellion, negotiation, etc.) without any one of them necessarily being unanimously considered preferable to the others. The preference for one type of profile over the others may depend, for example, on the sector of activity or type of company: recruitment in the army will probably place a higher value on an obedient profile, unlike recruitment in a start-up or in a company with a flatter hierarchy. Psychological tests are still widely used in recruitment today, although their format and administration methods may have changed (Box 1.1).
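To make the profile-based logic of such tests concrete, here is a minimal sketch; the three profiles are taken from the relationship-to-authority example above, while the test items, their mapping to profiles and the scoring rule are purely illustrative assumptions, not those of any actual instrument.

```python
# Illustrative sketch: scoring a hypothetical test on one's relationship
# to authority. Each item contributes points to one of three profiles;
# the respondent is assigned the profile with the highest total score.
# No profile is intrinsically "better": the scale is categorical, not ordinal.

# Hypothetical mapping of test items to profile dimensions
ITEM_PROFILE = {
    "q1": "submission", "q2": "rebellion", "q3": "negotiation",
    "q4": "submission", "q5": "negotiation", "q6": "rebellion",
}

def classify(answers):
    """answers: dict item -> score (e.g. 1-5 Likert). Returns (dominant profile, totals)."""
    totals = {"submission": 0, "rebellion": 0, "negotiation": 0}
    for item, score in answers.items():
        totals[ITEM_PROFILE[item]] += score
    return max(totals, key=totals.get), totals

profile, totals = classify({"q1": 2, "q2": 5, "q3": 3, "q4": 1, "q5": 2, "q6": 4})
```

The point of the sketch is the final step: `max` selects a category, not a rank, which is precisely why the choice between profiles must then be made against the context (sector, type of company) rather than read off the test itself.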

The aptitude, competence or intelligence test is a second tool that is often used, in the context of recruitment, for example. Although the distinction between aptitude, competence and intelligence remains relevant, these tests are placed in the same category here because they are all used to measure a characteristic of the individual considered useful and relevant for success in a given position. In addition, unlike psychological tests, aptitude, competence or intelligence tests are most often used to rank individuals on a one-dimensional scale. However, as with psychological tests, they require upstream reflection, in this case on the competencies or skills required for successful performance in the role (Marchal 2015). Although theories such as the g factor, or measures such as IQ, outlined in the introduction, assume that a single measure can predict or evaluate a set of cross-cutting competencies, most aptitude and competency tests are designed to correspond to a specific position. However, the division of a position into skills or aptitudes is not without its difficulties (Box 1.2).

A third tool, used in particular to decide on promotions, is the quantified evaluation by the manager or other stakeholders using a grid of criteria. This tool is therefore based on a direct assessment by a third party, but the definition of fairly precise criteria generally seeks to limit the intervention of this third party and the intrusion of their subjectivity into what is supposed to constitute an objective and fair assessment (Erdogan 2002, Cropanzano et al. 2007). Two scenarios can be discerned according to the number and status of the assessors: a situation where workers are assessed by their manager, and a situation where they are assessed by the clients with whom they come into contact. Evaluation by the manager is an extremely common situation in organizations (Gilbert and Yalenios 2017). However, this situation also varies greatly depending on the organizational context: the degree of formalization, frequency, criteria and use may differ. In terms of formalization, there are companies where the manager conducts an assessment interview with their subordinate without a prior grid, and others where the manager must complete an extremely precise grid on their subordinate, sometimes without this giving rise to an exchange with the person being assessed. In terms of frequency, some companies request annual assessments and others semiannual ones. In terms of criteria, situations where the criteria focus on the achievement of objectives should be distinguished from situations where they concern the implementation of behaviors. Finally, in terms of use, some companies may take managerial evaluation into account in remuneration, others in promotion, others in development, etc. (Boswell and Boudreau 2000).

It should also be recalled that evaluation methods have evolved over time (Gilbert and Yalenios 2017). Thus, the Taylorism of the first half of the 20th Century gave rise to a desire to rate workers on precise criteria relating to their activity and the achievement of objectives, while the human relations school of the second half of the 20th Century valued dialogue, and thus the implementation of appraisal interviews aimed both at evaluating and at establishing an exchange between managers and subordinates (Cropanzano et al. 2007). Evaluation by third parties, and, in particular, by clients, is a very different but increasingly common situation (Havard 2008), particularly in professions involving contact with third parties (Box 1.3).

The measurement of the achievement of objectives is a fourth tool used, in particular, for remuneration decisions (e.g. the allocation of raises or variable pay). This approach, which is part of management by objectives, popularized by the American consultant Peter Drucker, does not focus on the resources deployed by workers, but on the results achieved (Gilbert and Yalenios 2017). It requires defining, for each individual, the objectives they must achieve and the criteria for measuring their achievement. These two operations are less obvious than they seem at first glance. Thus, for a sales profession, the first reflex would be to evaluate the salesperson according to turnover or the number of sales made. However, a criterion of this type would strongly reward a seller who made very large sales but with a significant number of returns from customers dissatisfied with their purchase, even though this situation would be more damaging to the company than one where a seller made fewer sales but with fewer customer returns. Moreover, depending on the positions considered, measuring the achievement of objectives can be more or less difficult. How can one measure the achievement of objectives such as “carrying out a particular project” or “organizing a particular event in a satisfactory manner”? Finally, this results-based evaluation method takes into account neither the contingencies of work nor the fact that, in many professions, achieving an objective requires the cooperation of several people or company functions.
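The pitfall of a turnover-only criterion described above can be made concrete with a small worked example; the two sellers and all figures are invented for illustration.

```python
# Worked example of the sales-objective pitfall: ranking sellers on gross
# turnover alone rewards a seller whose large sales come with many customer
# returns, while a returns-adjusted criterion reverses the ranking.

sellers = {
    # name: (gross_sales, value_returned) -- invented figures
    "A": (100_000, 40_000),  # big sales, many dissatisfied customers
    "B": (70_000, 2_000),    # smaller sales, almost no returns
}

def gross(name):
    return sellers[name][0]

def net(name):
    sales, returns = sellers[name]
    return sales - returns

best_gross = max(sellers, key=gross)  # criterion 1: turnover only
best_net = max(sellers, key=net)      # criterion 2: turnover net of returns
```

With these figures, seller A comes first on gross turnover but second once returns are deducted, which is exactly the gap between the measured objective and what actually benefits the company.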

Thus, the quantified evaluation of individuals is both a very common and variable practice within organizations. We have given four examples, in some cases exposing the criticisms they may have given rise to, but without dwelling on the question of potential bias, which will be dealt with in Chapter 2. However, this does not exhaust all the quantification operations carried out by the HR function. In particular, the quantified evaluation of positions is another important aspect of this type of operation.

1.1.1.2. Quantification of work and positions

The quantification of work and positions, which was an essential foundation of Taylorism, then continued as an essential element of job classification operations.

From the end of the 19th Century, in the context of the rise of industrialization and mass production, reflections were launched with a view to maximizing the productivity of companies. An American engineer, Taylor, developed a system of “scientific organization of work”, which he believed would deliver maximum performance for a given set of resources (Taylor 1919). This system is based on three conditions. First of all, a detailed and rigorous analysis of work methods (workers’ movements, pace, cadence, etc.) makes it possible to identify the causes of lost productivity. The second step consists of defining very precisely the actions and tasks to be performed by each worker in order to achieve maximum productivity (what is termed the one best way, or best practice). Finally, remuneration is set in such a way as to ensure both greater objectivity and employee motivation (Box 1.4).

It is interesting to return to the term “scientific organization of work”. Indeed, the adjective “scientific” is justified by, among other things, the use of the measurement of work: “In most trades, the science is developed through a comparatively simple analysis and time study of the movements required by the workmen to do some small part of his work, and this study is usually made by a man equipped merely with a stop-watch and a properly ruled notebook” (Taylor 1919, p. 117). Indeed, the measurement of work is at the heart of Taylorism: measurement of the time required to perform each task, productivity gains, average and maximum worker productivity, pay gains that can be proposed, etc.

Taylorism was implemented at the beginning of the 20th Century, but it has given rise to many criticisms, not all of which will be discussed here. Thus, the philosopher Weil (2002) and the sociologist Linhart (1980) experienced factory work and described both its very high physical demands and its alienating dimensions. In another vein, the sociologists Crozier and Friedberg (1996) show that it is illusory to want to remove all individual margins of maneuver: individuals will always find a way to carve out spaces of freedom, thus recreating forms of uncertainty. Finally, the evolution of work in developed countries, and, in particular, tertiarization and the reduction of factory work, has reduced the relevance of Taylorism, which seems better suited to low-skilled jobs involving repetitive tasks.

These criticisms and limitations have led to a gradual decrease in the use of Taylorism as the main method of management and work organization, but the measurement of work has not been abandoned. Indeed, in the second half of the 20th Century, the desire of the State to set pay in order to avoid inflation, followed by the need to justify pay hierarchies and therefore to define appropriate pay for each position, led to large-scale job classification operations in many countries. However, the classification or weighing up of positions is also based on the quantification of each position. Whatever method is used (see Box 1.5 on the Hay method, probably one of the best known), this consists of evaluating each position on a list of criteria, and aggregating these criteria according to an ad hoc formula. This makes it possible to associate an index to each position, and thus to prioritize them, then to match an index to a salary level.
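A minimal sketch of such a point-factor logic is given below; the criteria, weights, aggregation formula and salary bands are hypothetical, and are not those of the Hay method or of any actual classification scheme.

```python
# Minimal sketch of a point-factor job classification: each position is
# rated on a list of criteria, the ratings are aggregated by a weighted
# formula into an index, and the index is mapped to a salary band.
# All criteria, weights and bands below are invented for illustration.

WEIGHTS = {"know_how": 0.4, "problem_solving": 0.35, "accountability": 0.25}

SALARY_BANDS = [  # (minimum index, band label), checked from highest floor down
    (80, "band C"),
    (60, "band B"),
    (0, "band A"),
]

def job_index(ratings):
    """ratings: dict criterion -> score on a 0-100 scale. Returns the weighted index."""
    return sum(WEIGHTS[criterion] * score for criterion, score in ratings.items())

def salary_band(index):
    for floor, band in SALARY_BANDS:
        if index >= floor:
            return band

idx = job_index({"know_how": 70, "problem_solving": 60, "accountability": 50})
band = salary_band(idx)
```

The two functions mirror the two operations described in the text: aggregating criteria into an index (the “ad hoc formula”), then matching the index to a salary level; every number that enters them is a choice open to negotiation, which is where the biases discussed in Chapter 2 enter.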

Once again, the use of quantification corresponds to an objective of rigor and objectivity. Yet, as will be discussed in Chapter 2, work has highlighted the many potential biases of these job classification operations (Acker 1989, Lemière and Silvera 2010).

In short, Taylorism and job classification are largely based on operations of work quantification. They are used to support HR decision-making, particularly with regard to work organization and remuneration. Even if these quantification operations concern positions rather than individuals, they ultimately lead, like the operations mentioned in the previous section, to decisions that can have an impact on individuals.

1.1.2. Informing and justifying decisions concerning individuals

The operations of quantifying individuals or work described in section 1.1.1 have several objectives. In this section, we return to the three main purposes: making individuals or objects (positions, activities) commensurable, classifying them, and justifying the decisions made.

1.1.2.1. Enabling comparison by quantification: commensurability and classification

In the case of the measurement of individuals, as in that of activities or positions, the aim is to make them comparable. Thus, recruitment tests aim to make individuals comparable so as to select a few from a larger number, and the weighing up of positions aims to make jobs comparable so as to rank them and thus define an appropriate salary scale. Ultimately, these operations consist of reducing the wide variety of information available on an individual or a position in order to represent them all on a single classification dimension, a single scale.

Espeland and Stevens (1998) refer to this process as “commensuration”. According to them, commensuration corresponds to the comparison of different entities using a common metric (in our examples, a score for a test, or an index in the job classification). As a result, commensuration has several characteristics. First, it transforms qualities into quantities, and thus reduces information, which tends to simplify information processing and ultimately decision-making processes. Commensuration also corresponds to a particular form of standardization, since it seeks to bring objects into a common format (the common metric). Unlike other forms of quantification, commensuration is fundamentally relative and not absolute: it allows comparison between entities, and has little value outside this comparison objective.

However, the authors also point out that commensuration processes can take a variety of forms. A first factor of variation is the level of technical development. Thus, the cost–benefit analysis developed by governments and economists is based on a particularly technical tool. At the other end of the spectrum, they give the example of the more empirical estimates of feminists seeking to measure the time women spend on domestic tasks. A second factor of variation is the degree of explicitness and, ultimately, of institutionalization of the commensuration. Some commensuration operations are so institutionalized that they help to define what they measure (e.g. the unemployment rate and the poverty rate) and influence the behavior of agents, even when they are criticized. Espeland and Stevens give the example of academic institutions that encourage their researchers to comply with international ranking criteria while regularly questioning these criteria. Other commensuration operations remain poorly disseminated and therefore have little effect on the definition of the objects they measure and on the behavior of actors. Finally, the third factor of variation concerns the agents of commensuration, from quantification experts to ordinary individuals, including, for example, interest groups.

We can use these three variation factors to qualify the commensuration operations discussed in the previous section (Table 1.1).

Table 1.1. The characteristics of HR commensuration

Factor of variation | Application to HR commensuration
Technological complexity | High degree of technological complexity
Institutionalization | High degree of institutionalization
Stakeholders involved | Experts; managers; trade unions and collectives

In HR, the level of technical development is high for most of the examples given. Thus, psychological and aptitude tests, the measurement of work in Taylorism and job classification are based on complex tools and are sometimes even backed by substantial theoretical corpuses. Rating by the manager or the client can also be highly instrumented when it relies on a grid of precise criteria designed to reduce managerial arbitrariness. The institutionalization of commensuration operations is also high. Indeed, in all the cases studied, commensuration is explicitly used to act on reality, since it is mobilized to make decisions. Finally, the actors concerned are more variable, from experts (occupational psychologists for psychological tests, for example) to trade unions or employee collectives (who are involved, for example, in the implementation of the Hay method).

Espeland and Stevens also point out the consequences of these processes. For example, commensuration can make certain aspects invisible by selecting the information to be included in the comparison. In HR, this is the case for aptitude tests that measure certain skills or competencies to the detriment of others, or for job classification operations that only take into account aspects of the work that meet predefined criteria. However, commensuration can also highlight certain aspects. Thus, the two authors give the example of feminist movements that have sought to measure the value of domestic work in order to integrate into the gender pay gap the inequalities related to the gendered distribution of unpaid domestic work. In HR, to give just one example, the implementation of Taylorism relies on, among other things, making the sources of decreased productivity visible (workers who walk fast when they have unloaded their pig iron but who must therefore take additional breaks for muscular recovery, for example). Finally, Espeland and Stevens are also interested in the process of commensuration as a social process: how to build an agreement on the common metric, how to make it accepted and how to use it in decision-making. In particular, they show the role of institutions and experts in this process. In HR, in the same way, it is crucial to be able to mobilize metrics that are acceptable to all stakeholders, including managers, employees and employee representatives. To promote this acceptability, HR can mobilize the work of experts or rely on participatory approaches to involve the various stakeholders (employees and managers, for example) in order to limit the possibilities of contestation.

Commensuration can sometimes take the form of a classification. In the cases of interest here, it is a human classification, in the sense that it refers to human beings or their activities (Hacking 2005). Classification processes confront a realist point of view, which considers that classes exist independently of the human beings who define them, and a nominalist point of view, which considers that human beings alone are responsible for grouping into classes. The nominalist point of view raises the question of how classes are constructed and then used. Hacking (2001) highlights the elements necessary for this analysis. First of all, it is necessary to analyze the criteria used to define the classes and who belongs to which class: for example, weight and height to calculate the body mass index used to define obesity. In HR, the level of diploma or the position held are the criteria used to define who is an executive and who is not. Second, the human beings and behaviors that are the subject of the classification may vary. Thus, in HR, classifications can relate to positions (professional category), individuals (“talent”, “high potential”, etc.), behaviors (such as “committed employees”), etc. Classification is also carried out by institutions. Hacking gives the example of diseases, whose classification is institutionalized by doctors, health insurance systems and professional journals, among others. In HR, the institutions that contribute to the definition and durability of a classification include the social partners, managers, and management and payroll systems. Finally, a classification also gives rise to (and is in return maintained by) knowledge and beliefs: in HR, for example, knowledge and beliefs about the behavior of executives in relation to non-executives, or of committed employees in relation to those less committed.

1.1.2.2. Justifying decisions

The use of quantification in the cases mentioned also responds to the challenge of justifying the decisions taken: quantification is seen as providing guarantees of neutrality and objectivity (Bruno 2015). Chapter 2 will return to the link between objectivity and quantification and to the factors that call this link into question. Here, the aim is simply to highlight the existence of strong incentives to mobilize quantification wherever neutrality requirements are formulated.

In the United States, the Civil Rights Act of 1964 and the Equal Employment Opportunity Commission, established in 1965, have strongly encouraged companies to standardize their individual assessment systems, both in recruitment and in career management (Dobbin 2009). In many cases, this standardization has involved, among other things, the use of quantification. This has several advantages, as Dobbin recalls. First of all, it seems to reduce bias by reducing managerial arbitrariness. Secondly, it offers the possibility of building a body of evidence to support decision-making in the event of litigation and legal remedies (see Box 1.6 on legal remedies related to the use of tests). It also facilitates the production of reports (requested by the Equal Employment Opportunity Commission). Finally, it contributes to strengthening the legitimacy of the HR function’s activity.

In sum, the use of the quantification of individuals or positions to support decisions that affect individuals is common in HR; its ultimate aims are commensuration and the justification of the decisions made (using the argument of neutrality).

1.2. From reporting to HR data analysis

Beyond this individual dimension, the HR function also has to regularly make decisions at the collective level: the definition of HR policies, decisions concerning collective raises, strategic HR decisions, etc. This explains why, in addition to this first use of quantification, there are other uses that allow for greater generality at the organizational level. Reporting and dashboards illustrate this approach well. However, more recently, the emergence of HR analytics has brought new dimensions to this approach.

1.2.1. HR reports and dashboards: definitions and examples

Since the second half of the 20th Century, companies in most western countries have had to publish figures on their workforce, practices and characteristics. However, this legal reporting, which, in some cases, may be supplemented by reports resulting from negotiations with unions, is not always used by companies. Several obstacles to its use can be identified, in particular the fact that the figures required by the legal framework are not always those that would be most relevant to the context of the companies in question. Some companies therefore voluntarily produce additional indicators or metrics, defined according to a given situation and need. For example, a company that identifies a gradual increase in turnover and considers it a problem can define indicators to quantify and monitor this phenomenon over time. This is part of an “HR dashboard” approach. In both cases, the approach is descriptive, aiming to measure phenomena that fall within the field of HR.

1.2.1.1. Legal reporting

The legal obligations to produce HR indicators in France and other European countries have multiplied since the 1970s (see Box 1.7 for the example of France).

The extent of social reporting obligations in France is not exceptional. For example, the European Union adopted a directive on non-financial reporting in 2014. This directive requires large companies to include non-financial information in their annual management reports, particularly with regard to personnel management and the diversity of governance bodies, and therefore also creates social reporting obligations.

This legal reporting has several purposes. First, it encourages the company to produce figures on phenomena in the HR field, and thus to become aware of them. For example, one of the stated objectives of the 1983 law on the publication of the comparative situation report in France was to formalize and quantify inequalities between women and men in order to have their existence recognized by employers and unions. Similarly, the obligation imposed by the European directive to provide detailed information on diversity within governance bodies is intended to highlight the importance of this subject. Second, this reporting requires the company to provide information to its social partners (e.g. trade unions) in the HR field. Indeed, most of the above-mentioned reporting obligations concern the publication of figures but also their transmission to unions or even the establishment of a dialogue with the unions based on the figures. This takes into account both the role of trade unions in policy making and decision-making on collective HR issues, and the importance of indicators as the first diagnostic element of the situation. Finally, reporting allows intercompany comparison, at the national level but also in some international cases, by stabilizing the definition and calculation of indicators. Thus, in 1997, the creation of the Global Reporting Initiative (GRI) made it possible to establish a complete set of indicators on a wide range of subjects, particularly on social and HR-related themes (Box 1.8). The publication of a single standard for calculating indicators thus ensures the reliability of international comparisons.
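As an illustration of the kind of indicator such a comparative report rests on, the following sketch computes a basic average gender pay gap; the salary figures are invented, and a real comparative situation report would break the gap down by professional category, position and working time.

```python
# Illustrative computation of a basic gender pay-gap indicator of the kind
# found in comparative situation reports. All salaries are invented.

salaries = [
    # (gender, monthly salary)
    ("F", 2400), ("F", 2600), ("F", 3000),
    ("M", 2800), ("M", 3200), ("M", 3600),
]

def mean(values):
    return sum(values) / len(values)

women_mean = mean([s for g, s in salaries if g == "F"])
men_mean = mean([s for g, s in salaries if g == "M"])

# Gap expressed as a percentage of men's mean pay, a common convention
gap_pct = round(100 * (men_mean - women_mean) / men_mean, 1)
```

Even this elementary indicator involves choices (mean or median, gross or net pay, which denominator), which is one reason why legal frameworks and standards such as the GRI stabilize the definition and calculation of indicators to make comparisons reliable.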

In other cases, it is with a view to an audit or obtaining a label that a company works to calculate and provide quantified indicators in the HR field. For example, obtaining the GEEIS label (Gender Equality European & International Standard) requires providing figures on different dimensions of gender equality within the company. Similarly, the international certification standards related to working conditions, regularly used during social audits, are based, in part, on quantified information (Barraud de Lagerie 2013).

1.2.1.2. HR dashboards

Beyond these obligations, which depend on the legal contexts specific to each country, HR actors have strong incentives to produce indicators on the different themes that concern them, particularly for management purposes. These statistical measurements usually lead to results in the form of cross-tabulated statistics. Le Louarn (2008) associates this approach with the production of dashboards, which he sees as steering tools that can be used to guide decisions or actions.

Several examples can be provided: absenteeism monitoring, social climate surveys, recruitment process monitoring, etc. These examples can first be analyzed by following Desrosières’ (2008b) distinction between the survey and the administrative register. Desrosières applies this distinction in the context of official statistics, but it is also enlightening for the HR field. It makes it possible to distinguish between administrative data, produced by administrative forms, for example, and survey data, collected by questionnaires sent to all or a subset of the population. Thus, most administrative data are accessible in the HRIS (HR Information System), which has gradually taken on considerable importance in companies (Bazin 2010; Laval and Guilloux 2010). Historically, the payroll process was the first to be computerized, requiring and enabling the computerized collection of individual employee data. Gradually, other processes have undergone this computerization (Cercle SIRH 2017): time and activity management, recruitment, training, career management, etc. These data have the advantage of exhaustively covering the entire employee population of a company. They can be used, for example, to draw up a statistical portrait or a dashboard of absenteeism within a company (absence data are usually computerized, in particular because they can have an effect on remuneration) or to build the comparative situation report mentioned in the legal reporting section above.

However, on some HR topics, HRIS data may be insufficient. For example, variables that could be useful in addressing a phenomenon such as employee engagement are rarely available in the HRIS. As a result, companies that wish to measure this phenomenon most often use employee surveys. These surveys generally take the form of online or face-to-face questionnaires, to which an anonymized sample of employees responds. The company has two options: mobilize a standard survey, whose questions are predefined by the organization selling it (such as the Gallup survey, see Box 1.9), or construct a specific questionnaire. The first option has the advantage of facilitating comparison with other companies, while the second allows for better consideration of the context of the company concerned. However, the second also requires in-depth reflection on what the company is trying to measure, given the variety of concepts related to engagement: job satisfaction, organizational commitment, etc. In addition, companies must also define the temporality and frequency of their engagement survey. Should it be an annual or biennial survey, or much more frequent, shorter surveys, sent out weekly, for example, on specific topics, so as to take the pulse of the population (hence the name of these surveys: pulse surveys)? Thus, startups (Supermood and Jubiwee) have recently developed offers dedicated to measuring engagement or quality of life at work based on very short questionnaires, called “micro-surveys”, sent regularly, at a weekly rate, for example (Barabel et al. 2018).

However, the distinction made by Desrosières (2008b) is insufficient in the HR field because it does not cover all available data sources. Indeed, in addition to these administrative data (HRIS) and survey data, process performance data are also available, the collection of which is now made possible by the increasing computerization of these processes. For example, most companies have now computerized their recruitment process, in the sense that they use software or a platform dedicated to this process. Yet this software produces a considerable amount of data, for example, on candidates, but also on the performance of the process. Therefore, the company can collect information on the conversion rate between the number of clicks on the offer and the number of applications, the duration of each recruitment stage, the time required to fill positions, etc. All this information can be valuable in measuring, for example, the attractiveness of the company or the performance of the recruitment process. Proponents of the evidence-based management (EBM) approach recommend defining three types of indicators related to the performance of HR processes (Cossette et al. 2014; Marler and Boudreau 2017): efficiency indicators (for recruitment, for example: number of candidates, time required to recruit them, recruitment costs), effectiveness indicators (measuring the quality of successful candidates and their fit with the organization’s needs) and impact indicators (measuring the effect of successful recruitment on the organization’s performance). The purpose of these indicators is to improve the performance of HR processes.
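To make the efficiency indicators just described more concrete, the following minimal Python sketch computes funnel conversion rates and a cost-per-hire figure. All figures, stage names and costs are hypothetical illustrations, not taken from the cited studies.

```python
# Minimal sketch of recruitment efficiency indicators (hypothetical figures).
# Each stage of the funnel is a count; the conversion rates between stages
# and the cost per hire are the kind of "efficiency" metrics discussed above.

funnel = {
    "offer_clicks": 4000,
    "applications": 320,
    "interviews": 48,
    "hires": 6,
}

def conversion_rates(funnel):
    """Compute the conversion rate between each consecutive funnel stage."""
    stages = list(funnel.items())
    rates = {}
    for (name_a, count_a), (name_b, count_b) in zip(stages, stages[1:]):
        rates[f"{name_a}->{name_b}"] = count_b / count_a
    return rates

def cost_per_hire(total_cost, hires):
    """Total recruitment spend divided by the number of successful hires."""
    return total_cost / hires

rates = conversion_rates(funnel)
print(rates["offer_clicks->applications"])   # 0.08
print(cost_per_hire(30000, funnel["hires"]))  # 5000.0
```

Effectiveness and impact indicators would require additional data (post-hire performance ratings, business outcomes) joined to these counts, which is precisely where simple dashboards start to reach their limits.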

In summary, HR dashboards gather data defined by the company in order to monitor an HR process or phenomenon. Le Louarn (2008) suggests classifying them into four types: operational dashboards, related to HR processes (recruitment, remuneration, training and evaluation, for example); HR results dashboards, related to employees (workforce, attitudes and behavior, for example); HR strategic dashboards, related to strategic HR management tools (recognition and skills, for example); and cost and revenue dashboards, related to HR expenditure and added value. This typology gives a good idea of the variety of HR topics that can be covered by dashboards.

1.2.1.3. Reporting and dashboards, characterized by a bivariate vision and an objective of compliance

Unlike quantification used for decision-making at the individual level, reporting and dashboards aim to inform decision-making at the collective level. Thus, the figures in a report are most often indicators aggregated at the level of the organization or its entities. Most organizations also define rules to ensure that no figures are provided for groups of fewer than five people, in order to preserve anonymity. However, two phenomena limit this role of reporting and dashboards: bivariate indicators generally remain insufficient to account for the complexity of certain HR phenomena, and situations where significant efforts are devoted to producing reports or dashboards that are subsequently used rarely or not at all are relatively frequent.

First of all, both reporting and dashboards most often adopt a univariate or bivariate view of the phenomena they measure, in the sense that they present their measures and results in the form of cross-tabulated statistics: absenteeism crossed with gender, profession, level of responsibility, etc. The example given above of the comparative situation report is particularly illustrative of this approach, since companies are required to systematically produce gendered indicators (i.e. cross-tabulated with gender). Similarly, the standard GRI indicators are also bivariate.
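The univariate and bivariate views described above can be sketched in a few lines of Python. The employee records below are hypothetical; the point is only to show what “crossing absenteeism with gender” amounts to computationally.

```python
# A bivariate (cross-tabulated) view of absenteeism, the kind of indicator
# found in reporting and dashboards. Hypothetical employee records.
from collections import defaultdict

employees = [
    {"gender": "F", "job": "technician", "days_absent": 4},
    {"gender": "F", "job": "manager",    "days_absent": 1},
    {"gender": "M", "job": "technician", "days_absent": 7},
    {"gender": "M", "job": "manager",    "days_absent": 2},
    {"gender": "F", "job": "technician", "days_absent": 6},
]

def mean_absence_by(records, key):
    """Average days of absence, broken down by a single variable."""
    totals, counts = defaultdict(float), defaultdict(int)
    for r in records:
        totals[r[key]] += r["days_absent"]
        counts[r[key]] += 1
    return {k: totals[k] / counts[k] for k in totals}

# Crossing absenteeism with gender: the bivariate view discussed above.
by_gender = mean_absence_by(employees, "gender")
print(by_gender)
```

Each additional breakdown (by job, by level of responsibility) is just another call with a different key; what such tables cannot do, as the next paragraphs argue, is disentangle the joint effect of several factors at once.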

Being able to cross two variables is a valuable tool, but it can also prove insufficient, particularly for understanding complex, multifactorial phenomena, i.e. those that refer to several factors, as is the case with many HR phenomena. The example of equal pay for women and men sheds particular light on this limitation (Box 1.10).

In addition, the production of indicators, particularly for reporting purposes, is often carried out under pressure from legal obligations or, more generally, from compliance aims, in the case of figures produced for audits or certifications, for example. These figures do not always lead to awareness or concrete actions. They are produced for a specific purpose (obtaining a label, complying with the law) and are not necessarily used outside it. This phenomenon is also found at the local level. The management of a department may ask an entity to produce figures; the entity may produce them, but this does not guarantee that it will use them to improve its understanding or decision-making. This is what we have observed within the French division of a large multinational company (Box 1.11).

On the other hand, in the case of dashboards, the objective stated by those who request their production is often to improve and inform decision-making. Le Louarn (2008) explains that HR dashboards should help HR management make better decisions and steer their actions, and thus contribute to the achievement of objectives. However, this normative purpose overlooks the fact that there may be gaps within organizations between discourse and practice, and between the design of a system and its use at the local level. Thus, like any management tool, a dashboard is actualized in use (Chiapello and Gilbert 2013). It is in this actualization that a gap can emerge between the objectives stated by the tool’s designers and its concrete use. Box 1.11 provides an example of such a gap. Chiapello and Gilbert (2013) allow us to analyze it by recalling the limits of the rational approaches that underlie beliefs in the power of management tools. The sociotechnical approach thus affirms the need for joint optimization of management tools and social systems, and emphasizes that one dimension cannot be changed without acting on the other. As a result, the introduction of a new dashboard, for example, cannot be done without taking into account the resistance or deviations from standard use that will inevitably arise.

Finally, reporting and dashboards have two main limitations: their bivariate dimension, given that HR phenomena are often too complex to be understood by simply crossing variables, and their often incomplete use, at both central and local levels. However, the dashboard approach is explicitly part of an EBM approach: the goal set by the designers of this type of tool remains to improve decision-making and thus human resources management. Le Louarn’s (2008) position is quite exemplary of this approach. He advocates a “staircase model”, which links HRM practices, HR results, organizational results and long-term business success. According to him, each of these dimensions can give rise to one or more dashboards, and the links between these dimensions could be estimated through correlation measurements between indicators in the dashboards. Other authors also support this vision. Lawler et al. (2010) thus explain that HR professionals must develop their data collection and publication activity if they want to make the HR function a strategic player in the organization. However, Boudreau and Ramstad (2004) make a clear distinction between producing or publishing data and integrating these data into an analytical model that makes it possible to derive value from them and, above all, to demonstrate the value of the HR function’s activity.

1.2.2. HR analytics and statistical studies

HR analytics is part of this second approach. The literature review conducted by Marler and Boudreau (2017) allows us to precisely define what it is. While the reporting and dashboard approach aims to produce HR metrics, HR analytics uses statistical techniques to integrate these data into analytical models. The literature on the subject is still relatively young, but allows us to identify some examples of the implementation of HR analytics in order to determine its main characteristics.

1.2.2.1. HR analytics: definitions and examples

Marler and Boudreau (2017) mobilize the various articles published on HR analytics to define their main characteristics.

The first characteristic, identified, in particular, by Lawler et al. (2010), refers to the use of statistical and analytical methods on HR data. This first characteristic therefore emphasizes a methodological distinction between reporting and dashboards, on the one hand, and analytics, on the other hand. This distinction can be linked to the limitations of reporting and dashboards that have been highlighted, and, in particular, the fact that these are bivariate approaches, whereas many HR phenomena are multivariate.

The second characteristic, presented, in particular, by Rasmussen and Ulrich (2015), refers to the identification and measurement of causal relationships between different phenomena, HR and non-HR. The identification of these cause-and-effect relationships generally requires the use of relatively sophisticated statistical methods (e.g. “all other things being equal” reasoning), and, in any case, a multivariate approach.

The third characteristic links HR analytics to a decision-making process. This characteristic is clearly oriented toward the EBM approach already mentioned. The idea put forward by the authors identified by Marler and Boudreau (2017) is based on the premise that HR data collection, combined with the use of sophisticated quantitative methods, provides evidence that improves management.

Finally, Marler and Boudreau conclude by proposing a definition of HR analytics:

“An HR practice enabled by information technology that uses descriptive, visual, and statistical analyses of data related to HR processes, human capital, organizational performance, and external economic benchmarks to establish business impact and enable data-driven decision-making” (Marler and Boudreau 2017, p. 15).

This definition remains abstract. Several examples can be used to show how it is put into practice. The first, cited by Garvin et al. (2013) in a case study, comes from Google, which has created a People Analytics team dedicated to analyzing HR data in order to model certain HR phenomena. Google mainly employs engineers and has a strong culture of data analysis. These two characteristics create strong incentives to adopt an EBM approach in an attempt to demonstrate to employees the contribution of the HR function to the company’s daily organization and performance. The People Analytics team notably examined the question of the role of the manager: is the manager really essential? And, if so, what is a good manager? To answer these two questions, it began by collecting information on the reasons for employees’ departures in exit interviews. However, as this information was not sufficient, the team focused on assessing the link between team performance and satisfaction with the manager. As the relationship was found to be significantly positive (the managers with the highest satisfaction ratings also had less turnover in their teams, a higher well-being index and higher performance, for example), the next step was to define what a “good” manager is. To this end, interviews were conducted with the managers who received the highest and lowest ratings. A semantic analysis of these interviews made it possible to define eight managerial “good practices”. Finally, employees were asked to rate their managers on these eight practices in order to individually target the training needs of each manager. This example therefore uses relatively simple statistical methods, with the exception of semantic analysis, but differs from a reporting or dashboard approach in its ability to integrate a heterogeneous set of data into a meaningful approach and model.

The second example concerns the modeling of workplace accidents (Pertinant et al. 2017). Occupational injury is a highly predictable phenomenon, in the sense that it is determined by a number of variables that can be readily identified. To model it, the authors use regression techniques based on “all other things being equal” reasoning. The principle of this reasoning consists of comparing identical profiles that differ on only one point. This makes it possible to isolate the effect of this factor on the variable of interest (in this case, whether or not an accident occurs). Methodologically, this means studying the effect of a single characteristic while controlling for the effect of the other characteristics. This method has the great advantage of isolating the effect of one or more variables on another, especially in the case of multifactorial phenomena. On the other hand, it is sometimes criticized because it does not adequately reflect the actual mechanisms at work. In other words, the “all other things being equal” situation is an artifice that does not exist in reality. The authors are able to isolate the effect of types of exposure on injury (Box 1.12). This example therefore illustrates a case where a complex, multivariate phenomenon such as workplace accidents can be analyzed in detail, taking into account all the explanatory variables.
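The “all other things being equal” reasoning can be illustrated without a full regression: comparing accident rates across exposure levels within groups that are otherwise identical captures the same logic. The sketch below uses hypothetical workers stratified by job type; the cited authors use regression with many control variables, which generalizes this idea.

```python
# Minimal illustration of "all other things being equal" reasoning:
# compare accident rates across exposure levels WITHIN groups that are
# identical on another characteristic (here, job type). Hypothetical data.
from collections import defaultdict

workers = [
    # (job, exposed_to_night_work, had_accident)
    ("technician", True, 1), ("technician", True, 1), ("technician", True, 0),
    ("technician", False, 1), ("technician", False, 0), ("technician", False, 0),
    ("office", True, 1), ("office", True, 0),
    ("office", False, 0), ("office", False, 0),
]

def accident_rate(rows):
    return sum(accident for _, _, accident in rows) / len(rows)

def stratified_effect(rows):
    """Effect of exposure on the accident rate, within each job stratum."""
    strata = defaultdict(list)
    for row in rows:
        strata[row[0]].append(row)
    effects = {}
    for job, group in strata.items():
        exposed = [r for r in group if r[1]]
        unexposed = [r for r in group if not r[1]]
        effects[job] = accident_rate(exposed) - accident_rate(unexposed)
    return effects

effects = stratified_effect(workers)
print(effects)
```

The within-stratum differences estimate the exposure effect net of job type; a logistic regression with job as a control variable is the continuous-data analogue of this comparison.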

The third example comes from a research study on well-being at work (Salanova et al. 2014). The researchers collected data on employees from different companies and sectors, and arrived at a typology of four types of employees: “relaxed”, “enthusiastic”, “workaholic” and “burned-out” (Box 1.13). Beyond its intrinsic interest for HR, this typology has the advantage of underlining the importance of the “pleasure” factor in work, since it is the dimension that most structures the typology. The typology method can prove valuable in HR, as it has long been in marketing, since it makes it possible to segment a population (that of employees, for example). As a result, it offers the opportunity to consider the employee population not as a homogeneous whole, but to identify different groups of employees that are not defined in advance. Once again, this method is based on taking a large number of variables into account simultaneously, and therefore goes beyond the limits of bivariate reasoning.
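A common way to build such a typology is clustering. The sketch below runs a plain k-means on two hypothetical well-being scores (“pleasure” and “energy”) with two clusters for brevity; it is not the method of Salanova et al. (2014), only an illustration of how groups emerge from the data rather than being defined in advance.

```python
# A minimal k-means sketch producing an employee typology from two
# hypothetical well-being scores. Fixed starting centroids keep it
# deterministic for illustration.

def kmeans(points, centroids, iterations=10):
    """Plain k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        centroids = [
            tuple(sum(coords) / len(coords) for coords in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# (pleasure, energy) scores for six hypothetical employees
points = [(0.9, 0.8), (0.8, 0.9), (0.85, 0.7),
          (0.2, 0.3), (0.1, 0.2), (0.15, 0.35)]
centroids, clusters = kmeans(points, centroids=[(1.0, 1.0), (0.0, 0.0)])
print(len(clusters[0]), len(clusters[1]))  # 3 3
```

With real survey data one would use more dimensions, more clusters and a library implementation (e.g. scikit-learn’s KMeans), but the segmentation logic is the same.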

1.2.2.2. A multivariate approach, for an analytical, decision-making and argumentative purpose

The examples given above make it possible to more precisely characterize the objectives put forward for HR analytics. We focus here on three of these objectives: the analysis of complex HR phenomena, decision-making and argumentation.

As we have pointed out, many HR phenomena are multifactorial, and therefore too complex to be understood by simply cross-tabulating a few variables. HR analytics is seen as a way to overcome this problem by providing statistical methods capable of integrating several factors or variables simultaneously, as we saw in the example of the determinants of workplace accidents (Box 1.12).

The examples given in the previous section also show the importance of the decision-making purpose of HR analytics. Indeed, in several of the cases presented, the purpose of data analysis is to guide or inform decision-making. This inclusion in the EBM approach is present in managerial discourse, but also in some research on the subject (Lawler et al. 2010). The decision-making purpose is also linked to the analytical purpose.

Indeed, the name of the EBM approach emphasizes the notion of “evidence”. In the context of HR analytics, it is both the sophistication of the methods used and their recognition by the scientific community as “scientific” methods that provide this guarantee of proof. The underlying idea is that a better analysis of reality provides better evidence and ultimately informs decision-making. This argument may seem difficult to refute at first sight.

For example, in the example of the gender pay gap, the use of a scientifically proven decomposition provides a more accurate picture of the causes of pay gaps, and thus allows more appropriate methods to be defined (Box 1.10). However, it is also possible to argue that this decomposition of the pay gap tends to justify the part of the pay gap explained by differences in the characteristics of the female and male populations, and thus clears companies of any responsibility for this gap and its reduction (Meulders et al. 2005).
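The logic of such a decomposition can be sketched in a few lines. Below, a raw pay gap is split into a part “explained” by composition (here, the distribution across job levels) and a residual part. The salaries and levels are hypothetical; real studies use regression-based decompositions (such as Oaxaca-Blinder) with many control variables.

```python
# Minimal sketch of decomposing a gender pay gap into a part explained by
# composition (distribution across job levels) and a residual part.
# Hypothetical salaries in thousands.

women = [("junior", 30), ("junior", 31), ("senior", 48)]
men = [("junior", 32), ("senior", 50), ("senior", 52)]

def mean(xs):
    return sum(xs) / len(xs)

def mean_by_level(records):
    levels = {}
    for level, pay in records:
        levels.setdefault(level, []).append(pay)
    return {lvl: mean(pays) for lvl, pays in levels.items()}

raw_gap = mean([p for _, p in men]) - mean([p for _, p in women])

# Counterfactual: apply women's level-specific averages to men's level mix.
all_levels = {"junior", "senior"}
men_mix = {lvl: sum(1 for l, _ in men if l == lvl) / len(men) for lvl in all_levels}
women_avg = mean_by_level(women)
counterfactual = sum(men_mix[lvl] * women_avg[lvl] for lvl in all_levels)

explained = counterfactual - mean([p for _, p in women])
residual = raw_gap - explained
print(raw_gap, explained, residual)
```

The residual is the gap that remains at equal job level, which is precisely the part the critique cited above worries may be understated when the “explained” component is itself the product of unequal access to senior positions.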

Finally, HR analytics also has an argumentative purpose. Thus, the results of the data analysis can be used to support statements or theses. For example, in the project conducted by Google, the results confirm or validate the hypothesis of managers’ influence on the performance of their team and therefore, more generally, the company.

More specifically, data analysis is sometimes used to demonstrate the importance of the HR function, in particular by demonstrating the existence of measurable links between variables related to HR activity and performance variables. Le Louarn’s (2008) staircase model was a first step in this approach, which is often described as a business case, and which has greatly benefited from the contributions of HR analytics, seen as providing more rigorous evidence of the existence of these links.

We can give two examples of business cases here. The first is the business case for engagement, developed in particular by Gallup (see Box 1.9 on Gallup). Indeed, in parallel with the construction of an engagement scale, Gallup conducts quantitative studies on the link between employee engagement and company performance. As a result, the firm is able to show in its 2017 report that business units in the highest quartile of employee engagement are 17% more productive and 21% more profitable than those in the lowest quartile.

The second example of a business case concerns gender diversity in companies. A relatively significant body of research has developed around the measurement of the link between gender diversity (of staff, boards of directors, executive committees) and the performance (economic, financial) of companies. At the same time, a managerial discourse has also spread on the subject, under the impetus of diversity departments wishing to enhance and legitimize their action (Box 1.14). Once again, sophisticated methods of data analysis have supported and partially legitimized these discourses.

These two examples highlight the argumentative purpose of HR analytics, which validates but, above all, supports managerial theses and discourse, particularly on the importance of the HR function in the company.

Finally, HR analytics addresses some of the limitations of reporting and dashboards, while being part of a relatively similar approach to using data to analyze phenomena at the organizational level and to improve decision-making. In particular, HR analytics, reporting and dashboards do not focus on the individual level, since they only provide aggregate indicators. Nevertheless, HR analytics is more explicitly part of an analysis, decision-making and argumentation process.

1.3. Big Data and the use of HR algorithms

More recently, the emergence of Big Data and the rapid spread of the notions of algorithms and artificial intelligence have created new uses for HR quantification. While these uses are still in their infancy and, to date, remain more of a horizon than a concrete reality, they introduce relatively significant breaks with reporting, dashboards and HR analytics.

1.3.1. Big Data in HR: definitions and examples

Several terms have emerged in the wake of the notion of “big data”1. “Big data” refers to the very large volumes of data produced partly as a result of digitization: data from social networks, biometric sensors and geolocation, for example (Pentland 2014); “Big Data” refers to the new uses, methods and objectives related to these data; “algorithms” refers to one of these new uses, perhaps the one that causes the most significant changes in everyday life; and, finally, “artificial intelligence” refers to a particular class of learning algorithms that are capable of performing tasks previously reserved for human beings (image recognition, for example). In any case, it seems necessary to make an effort to define these different terms. Moreover, the transposition of these terms and concepts into the HR field is not without its difficulties.

1.3.1.1. Big Data: generic definitions

The term Big Data remains poorly defined (Ollion and Boelaert 2015). In 2001, Gartner defined Big Data using three “Vs” (Raguseo 2018). First of all, the Volume of data must be large. Even if “large volume” is rarely defined precisely, it can be noted, for example, that such a volume may require working on dedicated servers and storage platforms rather than conventional computers. These data are also characterized by their Variety, in the sense that heterogeneous data sources (internal and external) can be used, and that they can be structured or unstructured (unstructured data, such as text or images, cannot be stored in a traditional spreadsheet, unlike structured data). Finally, these data are dynamic, i.e. updated in real time, which is called Velocity. Two other “Vs” have been added more recently (Bello-Orgaz et al. 2016; Erevelles et al. 2016): Veracity, referring to the issue of data quality, and Value, referring to the idea of deriving benefit from these data. This definition therefore emphasizes the technical characteristics of the data mobilized (volume, variety, velocity, quality) and leaves in the background the question of the methods used to process these data and the purposes of such processing. Yet Kitchin and McArdle (2016) show that very few data sets usually considered as “big data” (Internet searches, image sharing on social networks, data produced by mobile applications, etc.) actually display all of the characteristics identified. It is then necessary to revisit this definition, focusing on other dimensions: the methods used to exploit the data, their uses, etc.

Other works focus more on methods. Mayer-Schönberger and Cukier (2014) briefly evoke, in addition to the technical characteristics of the data, the shift from a paradigm of causality to a paradigm of correlation. More specifically, they argue that, even if correlation analysis provides no information on the nature of the relationship between two variables, it is sufficient for conducting so-called predictive analyses, since it identifies observable data that provide information on the behavior of unobservable data (Box 1.15). In a similar line of thought, Kitchin (2014) questions the need to review our epistemological research paradigms in light of the emergence of such large volumes of data, which could lead us from a knowledge-driven science to a data-driven science, i.e. a science that is either totally inductive, in which all lessons are drawn from the data, or deductive, but with assumptions generated from the data rather than from theory. The same trend could be observed within organizations, with the advent of data-driven management (Raguseo 2018).

Finally, some researchers are interested in the uses of Big Data. In particular, they highlight the notion of algorithms: algorithms for suggesting content, ranking Internet pages, or making predictions for the insurance or justice sectors (Cardon 2015; O’Neil 2016), etc. While the notion of an algorithm is very old and simply refers to a finite sequence of instructions, an important evolution has recently been introduced by the emergence of learning algorithms, which evolve according to their input data and which have enabled significant progress in the field of artificial intelligence (CNIL 2017). Cardon points out that this type of algorithm is particularly useful for processing large volumes of data, prioritizing them and selecting the information to present to the user. He insists on the fact that few aspects of contemporary society escape the presence of algorithms: Internet searches, consumption of cultural products, decision-making, etc. This omnipresence of algorithms, linked to the explosion in the volume of data produced by individuals, leads to what he calls the “computing society”. In particular, he defines four main types of algorithms that structure our relationship to the Internet and therefore to information:

– audience and popularity measurements: counting the number of clicks (e.g. Médiamétrie);
– authority measures: ranking of sites based on the fact that they can be referenced on other sites (e.g. PageRank);
– reputation measures: counting the number of exchanges a content gives rise to (e.g. number of retweets on Twitter);
– predictive measures: customization of the content offered to the user according to the traces they leave on the Internet (e.g. Amazon product recommendations).
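The principle behind an authority measure such as PageRank can be sketched with a power iteration on a toy link graph: a page’s score depends on the scores of the pages linking to it. This is only the textbook principle, not Google’s production algorithm.

```python
# Minimal power-iteration sketch of an "authority measure": a page's
# score depends on the scores of the pages that link to it. Toy graph.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * rank[p] / len(outgoing)
        rank = new
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
rank = pagerank(links)
# C is referenced by both A and B, so it receives the highest score.
print(max(rank, key=rank.get))  # C
```

The contrast with a popularity measure is visible in the code itself: a click counter would simply tally visits, whereas here the score of the linking page is propagated to the linked page.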

According to Cardon, these different types of algorithms profoundly structure more and more aspects of our lives, beyond our relationship to information. O’Neil (2016) goes further, pointing out that algorithms can have a very strong influence on our lives, since they have been used to make decisions in many areas (health, insurance, police, justice, etc.), and that they present a high risk of increasing inequality (Box 1.16).

These two authors therefore show the new possibilities created by the use of algorithms, and, among other things, the risks associated with them. Other authors are more positive, recalling the many advances made possible by the use of algorithms in the field of medicine: better quality diagnostics, for example (Mayer-Schönberger and Cukier 2014).

Thus, the Big Data phenomenon corresponds to a combination of the technical characteristics of data, the methodologies mobilized to process them and the emergence of new uses.

1.3.1.2. Big Data and the use of HR algorithms

However, the transposition of the concepts of Big Data and algorithms into HR is not so obvious.

First, in terms of the technical characteristics of the data, the various “Vs” mentioned above are in fact rarely found in the HR field. For example, the volume of data contained in an HRIS rarely exceeds a computer’s storage capacity. The most voluminous data of interest to the HR function are undoubtedly those related to emails (email exchange flows, or even the content of exchanges), but the use of email data remains underdeveloped. Second, HR data are rarely updated in real time: for example, annual reports containing HR figures are usually produced several months after the end of the year, reflecting the difficulty of producing the necessary data and making them reliable. On the other hand, HR does have a certain variety of data sources, from HRIS data to data contained in candidates’ CVs, as well as data produced by employees on the internal social network, for example.

However, so far, it is HRIS data that have been most mobilized, for example, in the context of reporting, dashboards or HR analytics (Angrave et al. 2016). Social network data, like CV data, have only more recently emerged as statistically usable, probably because they are unstructured. Thus, as we have seen, many companies commission time- and energy-consuming engagement surveys, and the idea that content from internal social networks could be used as an indicator of employee opinion and social climate has only recently emerged. Similarly, the possibility of using biometric sensors to measure cooperation or exchanges between or within teams is still relatively recent and little explored (Pentland 2014).

The second obstacle to the transposition of Big Data and algorithms into HR comes from the difficulty in identifying potential new uses. For example, among the algorithms identified by Cardon (2015) and described above, which could be relevant in HR? With our limited hindsight and the few examples we have today, it seems that prediction algorithms are the most transposable into the HR field, whether in recruitment, internal mobility, training or HR risk prediction (Box 1.17).

Beyond these cases of Big Data use in HR, another notion has emerged in connection with the emergence of the platform economy (Beer 2019): management by algorithms (or “algorithmic management”). This notion refers to an increasingly frequent reality. On platforms such as Uber, or Amazon Mechanical Turk, for example, a worker receives their work through an algorithm, not from a human person, and the pay for this work is also determined by the algorithm (Box 1.18). This notion goes beyond the strict framework of HR since it concerns management in general, but raises many questions for the HR function. In particular, major HR processes such as recruitment, training, mobility and career management can be totally disrupted by this operating model. In addition, this notion requires a rethinking of the balance of the HR–manager–employee triptych, for which the HR function is often the guarantor.

Finally, even if the transposability of the notion of Big Data into the HR field raises certain questions, it is indeed possible to identify uses that are similar to this notion. However, how do these uses differ from the other quantification uses mentioned in the previous sections?

1.3.2. The breaks introduced by Big Data in HR

The different uses identified in the previous section make it possible to pinpoint three major breaks introduced by Big Data in HR: automation, prediction and customization. Algorithms also present both potential and dangers for the HR function. The term “break” may seem strong, but it corresponds to fundamental changes, both in the HR stance and in the definition of quantification. The next chapter will discuss the meaning and implications of these changes; the purpose of this section is first to describe them.

1.3.2.1. Automation

Algorithms and Big Data in HR are positioned relative to a horizon of automation (whether moving toward it or away from it). Two discourses coexist on the subject. The first aims explicitly at automation. For example, CV pre-selection algorithms are often presented as effective substitutes for recruitment managers: faster, more efficient, less expensive, etc. However, this discourse runs up against risks of resistance to change and concerns about the future of the HR function. Moreover, in some countries, leaving decision-making entirely to an algorithm on a subject as important as recruitment remains legally and socially unacceptable. A second discourse therefore coexists with the first: it insists that these algorithms are meant as decision aids and in no way replace human decision-making. Thus, IBM Watson’s recruitment algorithm (Watson Recruitment) is presented as a decision-support tool that structures and selects the information presented to the recruitment manager, who no longer needs to read all the CVs since a summary is provided3.
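The “decision aid” discourse can be made concrete with a toy CV pre-screening score. This is a hedged sketch (the field names, skills and weights are invented, not those of any real product): the algorithm ranks candidates and surfaces a shortlist, but the final decision is left to the recruiter.

```python
def score(candidate, required_skills):
    """Illustrative score: weighted mix of skill overlap and capped experience."""
    overlap = len(set(candidate["skills"]) & set(required_skills)) / len(required_skills)
    experience = min(candidate["years_experience"], 10) / 10  # cap at 10 years
    return 0.7 * overlap + 0.3 * experience  # weights are arbitrary choices

def shortlist(candidates, required_skills, k=2):
    """Return the names of the top-k candidates - a decision aid, not a decision."""
    ranked = sorted(candidates, key=lambda c: score(c, required_skills), reverse=True)
    return [c["name"] for c in ranked[:k]]

candidates = [
    {"name": "A", "skills": ["python", "sql"], "years_experience": 3},
    {"name": "B", "skills": ["sql"], "years_experience": 12},
    {"name": "C", "skills": ["python", "sql", "spark"], "years_experience": 1},
]
print(shortlist(candidates, ["python", "sql", "spark"]))  # -> ['C', 'A']
```

Note how the automation question is entirely a matter of where the output goes: the same ranking could feed an automatic rejection (first discourse) or a recruiter's reading list (second discourse).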

1.3.2.2. Prediction

Algorithms and Big Data also give the HR function a predictive dimension. Unlike HR reports, dashboards and analytics, which focus on understanding past phenomena in order to make decisions in the present, Big Data HR uses past data to predict behaviors or wishes. Constructing an algorithm of job or training suggestions thus amounts to trying to predict which job or training will interest, or best suit, which employee. The prediction of HR risks such as resignations or absenteeism is another example already given. In this respect, the approach is ultimately similar, to some extent, to the quantification of the individual mobilized in recruitment and promotion and presented in section 1.1: quantification serves predictive purposes. This predictive dimension breaks with two other dimensions present in reporting, dashboards and analytics: the descriptive dimension and the explanatory dimension. However, the rupture is not as sharp as it first appears. From a methodological point of view, the methods that make it possible to explain (linear or logistic regression, for example) are generally the same as those that make it possible to predict, since they identify the determinants of the variable one is trying to explain (or predict). The disruption instead takes place on two levels. First, it implies a change in the positioning of the HR function. Second, it raises new ethical questions in HR. Chapters 2 and 5 will come back to this.
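The point that explanation and prediction rest on the same methods can be illustrated with a logistic model of resignation risk. The coefficients below are entirely made up for the example (a real model would be fitted to historical data); the sketch shows that one and the same equation supports both readings.

```python
import math

# Invented coefficients for illustration only - in practice these
# would be estimated from historical HR data.
COEFFS = {"overtime_hours": 0.08, "years_in_role": 0.15, "recent_promotion": -1.2}
INTERCEPT = -2.0

def resignation_probability(employee):
    """Logistic model: probability = 1 / (1 + exp(-z)), z a linear score."""
    z = INTERCEPT + sum(COEFFS[k] * employee[k] for k in COEFFS)
    return 1 / (1 + math.exp(-z))

# Explanation: the sign of each coefficient points to a determinant -
# overtime and time in role raise the risk, a recent promotion lowers it.
# Prediction: the very same equation scores an individual employee.
employee = {"overtime_hours": 10, "years_in_role": 4, "recent_promotion": 0}
print(round(resignation_probability(employee), 2))  # -> 0.35
```

The break the text describes is therefore not methodological but one of use: reading the coefficients is an explanatory act, while computing a probability for a named employee is a predictive one, with the ethical questions that follow.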

1.3.2.3. Customization

Finally, several examples were given where algorithms and Big Data allow some form of customization: sending tailored suggestions for positions and training, for example. The idea that mobilizing large volumes of data can enable customization may seem counterintuitive. The underlying principle, however, is that multiplying data on individuals makes it possible to gain in accuracy and thus to return to the individual. This goes beyond mere segmentation, which connects individuals to large “groups” and has had some success in both HR and marketing. Indeed, a collaborative filtering algorithm could in theory produce a unique set of suggestions for each individual. In practice this rarely occurs, but the interindividual variety of the sets of suggestions sent is fully in line with a logic of customization. Once again, this raises relatively new HR questions and issues, to which we will return in the next chapter.
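The collaborative filtering logic mentioned above can be sketched in a few lines. This is a minimal, user-based variant with invented data (employees and training histories are hypothetical): each employee is matched to similar colleagues, and the courses those colleagues took are ranked as suggestions.

```python
def jaccard(a, b):
    """Similarity between two sets of courses (size of overlap over size of union)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest(target, histories):
    """Rank courses taken by similar colleagues that the target has not yet taken."""
    taken = histories[target]
    scores = {}
    for other, courses in histories.items():
        if other == target:
            continue
        sim = jaccard(taken, courses)
        if sim == 0:
            continue  # ignore colleagues with no common training
        for course in courses - taken:
            scores[course] = scores.get(course, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

histories = {
    "alice": {"excel", "python"},
    "bob": {"excel", "python", "sql"},
    "carol": {"negotiation", "leadership"},
}
print(suggest("alice", histories))  # -> ['sql']
```

Because each employee has a different training history and different “neighbors”, each can in principle receive a different suggestion list, which is exactly the customization logic, as opposed to segmentation, described in the text.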

1.3.2.4. HR algorithms: potential, dangers and issues

As we have seen, algorithms are increasingly present and exert a growing influence on various aspects of our daily and professional lives. They offer many possibilities for the HR function, the organization and employees alike. For the HR function, algorithms are a potential source of productivity gains when they save time on tasks with little added value, as the Watson Recruitment video shows. Productivity gains can also come from predicting HR risks: an HR function capable of predicting absenteeism or resignations can take the necessary measures upstream to avoid, or at least limit, the associated losses, for example by building absenteeism forecasts into schedules or by defining more appropriate retention programs. Algorithms also give the HR function an opportunity to reflect on its own positioning (particularly the shift from description or explanation to prediction). Finally, they make it possible to offer employees new services (training suggestions, for example) and perhaps to gain legitimacy in their eyes. For the organization, as we have seen, management by algorithms is sometimes presented as a tool for better allocating tasks and resources, and thus for productivity gains. Finally, employees may benefit from services such as customized training suggestions.

However, algorithms also present dangers, which we have already mentioned and some of which we will return to. We have thus referred to the risks of discrimination and inequality highlighted, for example, by O’Neil (2016). In addition to these risks, there is a lack of transparency in some algorithms, which remain poorly accessible to most of the individuals concerned and whose operating modes are rarely explained (Christin 2017). Finally, these algorithms raise the question of the hegemony of the machine over the human being and the responsibility of decision-making: who is responsible and who is accountable for the decisions made by an algorithm? Its designers? Its users?

Finally, one of the major challenges lies in the possibilities of studying and analyzing these algorithms. A stream of research is beginning to develop around this question and highlights the need for an “ethnography of algorithms”, i.e. a science that would study the different actors involved in the construction and use of algorithms, and would bring out not only the technical aspects, but also the political and social choices underlying these two stages (Lowrie 2017; Seaver 2017).

This first chapter therefore allowed us to delineate different types of use of HR quantification: quantification of the individual and work to inform and support decision-making, reporting and dashboards to describe situations and HR analytics to analyze them, and algorithms and Big Data HR as emerging trends embodying a use of data for automation, prediction and customization purposes. The following chapters focus on a dimension outlined in this chapter: the link between quantification and decision-making, the appropriation of these tools by certain agents, the subsequent effects on the HR function, and the ethical and legal dimensions.

  1. In the rest of the book and for the sake of clarity, we write “big data” in lower case when we refer to data (big data, plural), and “Big Data” in upper case and singular when we refer to all new uses related to these data.
  2. Google raters complement the work done by Google’s algorithm by manually evaluating the quality of web pages referenced by Google, but most often work for intermediary companies that sell their work to Google. Turkers work on Amazon’s digital work platform (Amazon Mechanical Turk) and provide tasks to complement the work of Internet algorithms (e.g. rating audio or video files, content moderation).
  3. See, for example, https://www.youtube.com/watch?v=ZSX75SIySiE (accessed October 2019).