CHAPTER 3
Our Inheritance

Humankind is facing some of the greatest existential and social challenges in history.

As science and technology have progressed, problems, or networks of problems, have become increasingly complex. The systems that make up our society have been scaffolded on top of other systems and are bending under the pressures of war, the climate crisis, child labor, racism, social divide, and the societal push to continuously build and stack more and more systems. In 2020, the word “systemic” leapt from the academic world to the international vernacular almost overnight, and no word better describes the interlocking nature of these challenges.

What does that have to do with artificial intelligence, the Internet of Things, digital twins/simulations, robotics, and mixed reality?

There has traditionally been an artificial divide between social systems and organizational systems. In the context of that divide, the development and application of these technologies within organizations would have nothing to do with social problems.

Fortunately or unfortunately, depending on your perspective, this divide has been shattered by the Information Age. People want to know where their food comes from, how their clothing is made, how energy is delivered to their homes, if their bank has bias in its loan application programs, and so on—and if they do not like the answer, they push for change through purchasing decisions, boycotts, strikes, and advocacy for policy change.

This means that initiatives within organizations that benefit the organization at the expense of society have become unviable. Initiatives that benefit the organization without consideration of societal impact are the most common, although they are decreasing as organizations have begun building ethical practices into their rhythms of business. Initiatives that benefit the organization with a neutral societal impact are also common, and neutrality is becoming a new standard. Then there are those who seek to use their influence and resources to create a positive societal impact.

I once met a vice president at one of the largest chocolate companies in the world who wanted to guarantee that there was no child labor in his company's supply chain.

He had considerable influence, was highly motivated, and was well positioned to take on this problem, or so it would seem.

If his company directly employed children, this problem would have been quickly solved by enforcing company policy, terminating those employees, and creating more governance to ensure it did not happen again.

Unfortunately, his situation was significantly more complicated. His company bought processed chocolate from a company that bought raw cacao from other companies that bought raw cacao from farmers. In other words, there were at least three layers between him and the farmers, and he had no information on the farmers or the companies that bought and aggregated the raw cacao from the farmers.

This was a systemic problem over which his organization, despite being an integral part of the system, could exercise no influence. The situation could be likened to a steering wheel: although it is a fundamental part of an automobile, and the part closest to the customer (or driver), it has no ability to fix a problem with the engine.

We met and discussed many ways we might resolve this challenge, such as creating a coalition of chocolate companies that could exercise influence over the companies that process their chocolate, which could in turn press the companies that buy and aggregate raw cacao to share the list of farmers from whom they source cacao. We discussed then using satellite imagery or drones to regularly inspect the farms, artificial intelligence to process the video data and identify any suspected child labor, blockchain to connect individual farms to batches of chocolate, and working with local government leaders for enforcement.
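To make the traceability idea concrete, here is a minimal sketch, in Python, of how a tamper-evident ledger might link batches of chocolate back to farms. This is not the system we designed: a simple hash chain stands in for a full blockchain, and the batch and farm identifiers are invented for illustration.

```python
# A minimal sketch of batch-to-farm traceability. A hash-chained list of
# records stands in for a full blockchain; all identifiers are invented.
import hashlib
import json

def add_record(ledger, batch_id, farm_ids):
    """Append a tamper-evident record linking a chocolate batch to farms."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"batch": batch_id, "farms": farm_ids, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)

def verify(ledger):
    """Recompute every hash to confirm no record was altered after the fact."""
    prev = "0" * 64
    for record in ledger:
        body = {k: v for k, v in record.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or record["hash"] != digest:
            return False
        prev = record["hash"]
    return True

ledger = []
add_record(ledger, "batch-001", ["farm-17", "farm-22"])
add_record(ledger, "batch-002", ["farm-05"])
print(verify(ledger))  # True, unless a record has been tampered with
```

Of course, such a ledger is only as good as the participation of the aggregators who would write to it, and they were exactly the parties over whom we had no influence.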

In other words, all of our ideas existed within the context of the system and were focused on either improving the system or adding new parts to the system. We were examining the situation through a mechanistic worldview.

Our Inheritance from the Industrial Revolution

There are countless examples like the above, where good intentions collide with systemic challenges and, despite alignment at the highest levels of the organization, strong ideas, and shared passion and momentum, the initiative does not move forward.

The issue lies not in the individual components, but in the process by which we endeavor to solve complex problems, a process incapable of producing the results we desire.

This is because these systems were built either within the context of the Industrial Revolution or on the foundation of systems built within the context of the Industrial Revolution.

Since then, organizations around the world have been incrementally improving and building on those original systems: with new mechanistic capabilities throughout the twentieth century, and with an overlay of digital capabilities in the era of Digital Reformation and Transformation.

This orientation is evident in the press and academic announcements around real‐world applications of the next iteration of the Industrial Revolution (or Industry x.0). The Industrial Revolution broke work down to its most basic unit, referred to as a work element. In the ensuing era, all work elements that could be mechanized were mechanized, and all remaining work was assigned to human workers, in a move that treated workers like machines.

Subsequently, new machines were developed to take on additional tasks that had been relegated to humans, which further devalued the human worker.

When digital capabilities were introduced to the market and what had been analog began to transition to digital wherever possible, more work could be done by the same number of people, but the relationship between humans and machines in the workplace remained the same.

At the organizational level, the systems designed in the Industrial Revolution, and their mechanistic worldview, remain intact, such as in organizational structures and in departments within universities. Generally speaking, students are aligned to the part of the system in which they are most interested in functioning. They are then instructed in how best to operate as that part within a system, such as accounting, marketing, or engineering. Then, in most organizations around the world, graduates are hired into the functions for which they have been trained, and they serve as a functioning part of the broader system.

In other words, our organizations, companies, and governments have been developed from a mechanistic worldview—that is, as machines—based on the most advanced thinking of the Industrial Revolution. Since then, each generation has been charged with the maintenance and incremental improvement of those systems. This progression has met its logical end on two fronts.

First, incremental improvements cannot fundamentally change a system. This means that, despite the best intentions of individuals who would change the world, the existential and social challenges faced by our society today cannot be met by our existing systems and trajectory. To borrow from an earlier chapter, you cannot reform your way into a better future.

Second, the management and organizational systems of the Industrial Revolution are incompatible with the latest wave of technological advancements. This is evident in statistics on how many machine learning models make it into production (13%) and on how many businesses report little to no return on investment for their initiatives focused on artificial intelligence, digital twins and simulations, the Internet of Things, robotics, and mixed reality.

Taylorism, or Scientific Management

Taylorism, named after its inventor, Frederick Winslow Taylor, is predicated on the idea that there is one “best” way to do any task, and that by analysis—breaking the task into smaller, simpler parts and studying each part separately—we can find the most efficient and productive way to perform the task as a whole. It was a revolutionary approach to management that transformed the way factories were run in the early twentieth century.

This was not just about increasing efficiency and productivity; it was about a fundamental shift in the way people thought about work. It was about the belief that the best way to improve the performance of any worker was to study their every move and find ways to make them do their job faster and better. It was about the idea that there was always a better way to do things, and that the best way was the one that required the least amount of effort. This shift in thinking was groundbreaking at the time, applying the scientific method to management (hence its other name, scientific management).

But as we delve deeper, we begin to see the hidden costs of Taylorism. The application of Taylorism in the workplace led to a dehumanization of work, treating workers as cogs in a machine and taking away their autonomy.

Taylorism also led to the systematic removal of creativity and spontaneity: every task was standardized, and there was no room for improvement or experimentation. Tasks were defined by “experts” and executed by workers.1

Throughout the twentieth century, Taylorism continued to change the way people thought about work and efficiency. It laid the foundation for other disciplines that apply the scientific method and continues to shape the way people view the worker and their role in the workplace.

A short list of examples of tools that Taylorism contributed to management thinking illustrates its pervasiveness in corporate culture even today: process analysis, process mapping, elimination of waste, process optimization, knowledge transfer, measures of efficiency, process documentation, and best practices.

The application of the scientific method to our thinking processes has become so pervasive that it influences how organizations brainstorm, the criteria by which ideas are selected, the design of experiments, and the measurement of effectiveness. It is comfortable because it is objective and appears to absolve decision makers of risk.

The scientific method is dangerous when it leads to a focus on measurement over impact, particularly within an organization that operates mechanistically, as individual parts within a larger machine.

This can be observed in a standard monthly business review, filled with summarized metrics, in which each team is achieving its goals disconnected from the performance of the overall organization. If the marketing team has satisfied its quota for lead generation and the total number of impressions and engagement is higher than ever, and the sales team has had a record quarter, closing more deals than ever before, and the engineering team has increased product quality and output over last quarter while lowering cost, who is accountable for the fact that customers are choosing a competitor's product?

Pause and consider what your mind did with that question. Did you think of the other organizations that could have been involved? Maybe checking customer service metrics would reveal that customers are frustrated with our customer service process. It could also be a design problem, but then again the designers met the design specifications on time and under budget.

In other words, analysis has become an automatic response to most questions—taking the system apart and looking for objective data from which to understand a larger problem. And this is an important method by which to explain what is going on within a system. But it cannot explain why. Analysis provides explanation, but not understanding.

Diagnostic analytics is the combination of analysis and synthesis to provide understanding and answer the question of why: analysis to understand what happened and where it happened (production has slowed in factory x), synthesis to examine the larger context and form hypotheses (have there been any changes in management or process in factory x?), analysis to test those hypotheses—cycling through this process until an answer is found. In the context of analytics and data science, analysis falls within the expertise of the data scientist or machine learning engineer. Synthesis in this context requires an understanding of the domain, which therefore requires the domain expert. There are many organizations that have found that balance within diagnostic analytics through persistent trial and error, but it should be noted that the formal definitions of diagnostic analytics from credible and widely acknowledged resources do not include synthesis, and explanations of the process only mention the need for technical resources.
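A minimal sketch may make the cycle concrete. The production data, the contextual event, and the threshold below are all invented; the point is only the shape of the loop, in which analysis locates the anomaly and tests explanations, while the candidate explanations themselves come from someone who knows the domain.

```python
# A minimal, self-contained sketch of the diagnostic analytics cycle.
# The data, the event, and the threshold are invented for illustration.

# Weekly output per factory (hypothetical data).
output = {
    "factory_x": [100, 98, 97, 71, 70, 69],
    "factory_y": [95, 96, 97, 96, 95, 97],
}

# Contextual knowledge only a domain expert would hold (hypothetical).
events = {
    "factory_x": {"week_of_change": 3, "change": "new shift schedule"},
}

# Analysis: WHAT happened and WHERE -- the factory with the largest drop.
def find_anomaly(output):
    drops = {f: series[0] - series[-1] for f, series in output.items()}
    return max(drops, key=drops.get)

# Synthesis: the domain expert proposes candidate explanations for that site.
def propose_hypotheses(factory):
    event = events.get(factory)
    return [event] if event else []

# Analysis: test whether the drop coincides with the proposed change.
def test_hypothesis(factory, event):
    series = output[factory]
    week = event["week_of_change"]
    before, after = series[:week], series[week:]
    return (sum(before) / len(before)) - (sum(after) / len(after)) > 10

factory = find_anomaly(output)             # what and where
for event in propose_hypotheses(factory):  # why, as a candidate
    if test_hypothesis(factory, event):    # the candidate survives testing
        print(f"{factory}: output fell after the {event['change']}")
```

Nothing in the analysis steps can generate the entry in `events`; that is the synthesis, and it belongs to the domain expert.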

That omission of synthesis is one of the contributing factors in the failure of artificial intelligence and emerging technology initiatives. Leaders and managers seeking to better understand the performance of their companies have been excited by the ability of machine learning to process more data, and faster, than a human could, and thereby glean insights into how the company could perform better. Unfortunately, because machine learning is an analysis tool, it can be leveraged to explain what is happening in the organization, but it cannot explain why. Additionally, predictive and prescriptive analytics, both phenomenal capabilities within the proper contexts, are easy to misinterpret based on their naming conventions. Predictive analytics is a perfect tool for applying the scientific method to a business process: based on what has happened, if the context stays the same, what will happen again? Prescriptive analytics takes this a step further by combining predictions with business rules: based on what has happened, if the context stays the same, what should be done when it happens again?
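The distinction can be illustrated with a minimal sketch, assuming a simple linear trend over invented demand data. The prediction answers what will happen if the context stays the same; a business rule (the reorder threshold here is an invented example) turns that prediction into a prescription.

```python
# A minimal sketch of predictive versus prescriptive analytics.
# The demand data and the reorder threshold are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Eight weeks of historical demand (hypothetical).
weeks = np.arange(8).reshape(-1, 1)
demand = np.array([120, 125, 123, 130, 134, 133, 138, 141])

# Predictive: based on what has happened, if the context stays the same,
# what will happen again?
model = LinearRegression().fit(weeks, demand)
forecast = model.predict(np.array([[8]]))[0]

# Prescriptive: combine the prediction with a business rule -- if the
# context stays the same, what should be done when it happens again?
REORDER_THRESHOLD = 135
action = "place a restock order" if forecast > REORDER_THRESHOLD else "hold"
print(f"forecast for week 8: {forecast:.1f} units -> {action}")
```

Both steps inherit the same assumption: that the context stays the same. Neither can tell you why demand is moving.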

There have been many notable examples of the proper implementation of these technologies. But organizations that invest in predictive and prescriptive analytics expecting to identify and understand what their own experts do not understand, about what will happen or what to do in the future, are exemplifying the art of the impossible and designing for failure. This experience has produced many leaders who consider themselves to have tried artificial intelligence and now believe it to be just hype, or that the technology is not quite where it needs to be to be useful or practical.

Data Science Taylorism*

The principal process by which Taylorism was applied to increase effectiveness in the twentieth century was careful observation by an external expert holding a stopwatch, who meticulously documented, timed, and applied the scientific method to analyze workers’ production processes. These external experts then produced an optimized design for each individual task, a process by which the skills to complete those tasks could best be learned, and a proposed schedule for each individual worker with a targeted threshold of quality and output.

Unfortunately, this is almost indistinguishable from the principal process by which many consulting and technology firms have approached the application of data science for their customers and clients.

In Taylorism, scientific management experts collaborated with managers in order to improve the work of the “man of sluggish type, […] an educated mechanic, or even an intelligent laborer.”2

In Data Science Taylorism, data science experts collaborate with managers with the aim of gleaning answers from the data to then educate or better assign tasks to the engineers and front-line workers. This approach dismisses the expertise of those closest to the customers and processes in favor of data.

I have observed well‐meaning individuals attempt to follow this process in hopes of achieving value on behalf of customers and clients, but this approach increases the social divide between technologists and domain experts (more on that in Chapter 13), drastically reduces the likelihood of project success, and can be credited for a healthy portion of the 87% of machine learning models that never make it into production.

Notes

* Data Science Taylorism is distinct from “Digital Taylorism” or “New Taylorism,” which has been defined as the use of monitoring technology to apply a form of scientific management at scale, where the scientific management expert with the stopwatch has been replaced by cameras, sensors, and automated data analysis.
1. Pierre Bourdieu, The Social Structures of the Economy (Polity Press, 2005).
2. F. W. Taylor, the story of Schmidt, from Chapter 2 of The Principles of Scientific Management (Harper & Brothers, 1911).