CHAPTER 19

Artificial Intelligence

The Next Era

What’s common to stock trading, vacuum cleaning, language recognition, creating movie trailers, Mergers & Acquisitions (M&A) deal research, image recognition, data center energy consumption optimization, driving cars, and predicting hypoglycemic events in diabetes? Yes, you’ve probably guessed—they are all things that can be, and are being, done with the help of artificial intelligence, or AI. AI has long been seen as the pinnacle of computing intelligence. It has taken many forms, some humanoid and some not. It spans popular culture, computer science, and fantasy fiction.

What Is AI?

The term AI was coined in the 1950s, largely based on the vision that computers could one day be as intelligent as humans. Today, as we get closer to this in some respects, that definition seems to me flawed: partly because machine intelligence works fundamentally differently from human intelligence, and partly because human intelligence is not a destination, just a milestone. There is no reason, in principle, why a machine can’t be smarter than humans.

Perhaps it’s more constructive to think of AI as automating decision making. Rather than following the finite, predetermined if-then-else options and logic found in almost every other area of software, AI systems are designed to learn the decision logic, or the choices, or both. The evolution of chess-playing computers highlights this perfectly.

In 1997, the IBM computer Deep Blue defeated Garry Kasparov in a best-of-six-game match. It was the first time a computer had won a match against a reigning world champion. This was the result of 40 years of evolution for chess-playing computers. The first generations of chess-playing computers were built by chess-playing engineers who would embed their knowledge and choices into a set of instructions. Consequently, they were only as good as their programmers. Then, as computing power grew, computers would essentially use brute force to calculate all possible options over several moves and effectively try to out-compute humans. This worked against amateurs but, despite the advances of Moore’s law, was quite limited against seasoned players, because good chess players play by pattern recognition and don’t try to compute all the moves anyway. Then came the breakthrough in the 1990s, when engineers figured out a way of making the program learn from previous games and work out probabilities, as well as perform a high number of calculations around the options. This was the version of Deep Blue that finally defeated Kasparov. The chess story doesn’t end there, but we’ll come back to it.

AI systems therefore combine a learning ability with inference-making. For example: all cars have four wheels; the Ford Fiesta is a car; therefore, the Ford Fiesta has four wheels. It’s intuitive for us to get to the third statement given the first two, but giving a computer that capability is harder. How did we get here? While AI is a catch-all phrase, the specific enabling technology that has brought us to AI as we know it today is machine learning. As with the chess example, allowing a machine to learn from the available data accelerates the capability improvement of the system. So rather than try to design an intelligent system, the focus is on designing a learning system, because computers naturally process raw data much faster than humans.
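
To make the inference step concrete, here is a minimal, hypothetical sketch in Python of the classical, rule-based way of encoding that syllogism (the fact format is invented; learning systems, by contrast, induce such rules from data):

```python
# A toy rule-based inference: the rule "all cars have four wheels" applied to
# known facts. The fact format is invented purely for illustration.
facts = {("Ford Fiesta", "is_a", "car"), ("Boeing 747", "is_a", "plane")}

def infer_wheel_counts(facts):
    """If X is a car, conclude that X has four wheels."""
    derived = set()
    for subject, relation, obj in facts:
        if relation == "is_a" and obj == "car":
            derived.add((subject, "has_wheels", 4))
    return derived

print(infer_wheel_counts(facts))   # {('Ford Fiesta', 'has_wheels', 4)}
```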

There are two broad methods for machines to learn: supervised and unsupervised. Supervised learning involves exposing an AI system to a lot of labeled data (say, pictures of cats), so that it can start to recognize cats and, as importantly, distinguish between cats and not-cats. This is not unlike how we teach children. A two-year-old growing up in London (let’s call her Zianna) might mistake a cat for a dog. But we keep showing her more examples, and quite soon she understands how to distinguish them.

But imagine a situation as in The Jungle Book, where a human is growing up among the animals in the jungle (let’s call her Jane). She has no supervision, but it is likely that, soon enough, she will distinguish between cats and dogs, and many other species she is exposed to. She might not call them by any recognizable name. She might have a completely different way of telling them apart: perhaps she smells them differently, knows their unique sounds, or can tell by their footprints. This is broadly how unsupervised learning works. The system is let loose on the data and allowed to make its own connections and inferences.

What’s the difference? Supervised learning works better for some types of problems, such as classification (cars vs. buses) and regression (rainfall projections). Unsupervised learning works for clustering (consumer segmentation) and association (customers who bought this also bought that). Deep Blue was a form of supervised learning, and its defining feature was the ability to make chess decisions at a level of competence beyond that of any of its designers and engineers.
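
To make the distinction concrete, here is a minimal sketch using scikit-learn; the tiny “pet” dataset (weight in kg, ear length in cm) and its labels are invented purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Invented features: [weight in kg, ear length in cm] for three cats and three dogs.
X = np.array([[4.0, 4.5], [5.0, 5.0], [3.5, 4.0],
              [20.0, 10.0], [25.0, 12.0], [18.0, 9.0]])

# Supervised: we supply the labels (0 = cat, 1 = dog), like showing Zianna
# labeled examples, and the classifier learns to label new animals.
y = np.array([0, 0, 0, 1, 1, 1])
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[4.2, 4.8]]))            # -> [0], i.e., "cat"

# Unsupervised: no labels at all. Like Jane in the jungle, KMeans simply groups
# the animals into two clusters by similarity; it has no names for them.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)                                    # e.g., [1 1 1 0 0 0]
```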

Chess-playing computers have evolved significantly since 1997. More modern versions, culminating in AlphaZero from DeepMind (an Alphabet/Google company), use a method where the computer is taught the rules of the game and then left alone to play millions of games against itself—44 million in the first nine hours, according to the company. As it does so, it starts to assign probability values to each move, in terms of its contribution to winning or losing games. It then gets better at a blindingly fast speed. Here’s a statistic for you. In 2017, the world’s leading chess engine, Stockfish, could evaluate 70 million positions in a second. AlphaZero took four hours to play better chess than Stockfish, from a standing start, and easily defeated Stockfish in a 100-game match.
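
Here is a deliberately naive, hypothetical sketch of that “assign probability values to each move” idea: it simply counts how often each opening move led to a win in a handful of made-up self-play records. AlphaZero itself uses deep neural networks and tree search, not counting, but the principle of refining move values from self-play outcomes is the same:

```python
from collections import defaultdict

# Hypothetical self-play records: (first_move, result), with result 1 for a win
# and 0 for a loss from the mover's point of view. Invented for illustration.
self_play_games = [("e4", 1), ("e4", 1), ("e4", 0),
                   ("d4", 1), ("d4", 0), ("c4", 0)]

wins, played = defaultdict(int), defaultdict(int)
for move, result in self_play_games:
    played[move] += 1
    wins[move] += result

# Estimated value of each move, refined as more self-play games accumulate.
for move in played:
    print(move, round(wins[move] / played[move], 2))   # e4 0.67, d4 0.5, c4 0.0
```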

Chess is a game where the same data is always available to all players (and observers). There is no data asymmetry, unlike card games, for example. Business has a lot of data asymmetry, so we need to add a few layers of understanding to this.

Key AI Concepts

What Is Reinforcement Learning?

This idea that the system learns at speed comes from reinforcement learning. The system is given a reward for the right answer and a punishment for the wrong one, which incentivizes the learning: a move that contributes to a winning position is rewarded, while one that leads to a losing position is punished, for example. Note that this approach has at its foundation a notion of desired outcomes and a good-versus-bad framework. In our example of the children, Jane might learn faster than Zianna if her reward/punishment mechanism is a stronger one; for example, she gets scratched if she doesn’t recognize a fox. (Note: I do not suggest any kind of harsh reward/punishment model for children!)
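
A minimal sketch of the reward-and-punishment mechanic, using the Jane example (the states, actions, and rewards are invented; real reinforcement learning systems use far richer state representations and update rules):

```python
# The value of taking an action in a situation is nudged up after a reward and
# down after a punishment, so better choices gradually win out.
values = {("fox_nearby", "run"): 0.0, ("fox_nearby", "ignore"): 0.0}
learning_rate = 0.5

def update(state, action, reward):
    old = values[(state, action)]
    values[(state, action)] = old + learning_rate * (reward - old)

update("fox_nearby", "ignore", -1.0)   # punishment: Jane gets scratched
update("fox_nearby", "run", +1.0)      # reward: she stays safe
print(values)   # "run" now carries a higher value than "ignore" for this state
```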

What Is an Algorithm?

An algorithm is a sequence of steps and calculations required to get to a specific outcome. This is distinct from calculating a specific value, which is a formula. For example, reorganizing a shelf of books in alphabetical order in the most efficient way may require an algorithm. Or separating socks by color. Or estimating the area of a polygon. Even baking a cake from a recipe is following an algorithm. A learning algorithm is one that looks at the success of previous decisions and uses that experience for future instances of the task.
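
For instance, the bookshelf example can be written down as a simple algorithm (insertion sort, roughly the way many of us do it by hand); a minimal sketch:

```python
def shelve_books(books):
    """Reorganize a shelf alphabetically by inserting each book, one at a time,
    into its correct place among the books already sorted (insertion sort)."""
    shelf = []
    for book in books:
        position = 0
        while position < len(shelf) and shelf[position].lower() < book.lower():
            position += 1
        shelf.insert(position, book)   # slide the book in at the right spot
    return shelf

print(shelve_books(["Dune", "Atonement", "Catch-22", "Beloved"]))
# ['Atonement', 'Beloved', 'Catch-22', 'Dune']
```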

What Is a Neural Network?

A group of algorithms working together is a neural network. Each performs specific tasks, and collectively they allow the neural network to make decisions. Complex decisions can be made by relying on multiple layers of neural networks. GPT-3 is a neural network model used for generating human-like text; it uses hundreds of billions of probabilistic calculations to figure out which text is most likely to come next.
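
GPT-3 itself is a huge neural network, but the underlying idea of predicting the most probable next word can be illustrated, very crudely, with simple word-pair counts on a toy corpus (this is not how GPT-3 works internally; it only illustrates the “most probable continuation” idea):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def most_probable_next(word):
    counts = next_words[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(most_probable_next("the"))   # ('cat', 0.5): "cat" follows "the" half the time
```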

What Is Deep Learning?

Deep learning is a subset of machine learning that uses multiple layers of neural networks. For instance, in order to recognize human faces in a photograph, you have to be able to first distinguish human faces from other oval shapes, such as balloons. Then you have to look at individual features, such as the eyes, the nose, and the shape and size of the face. Deep learning works by assigning these subproblems to the layers of neural networks, so that collectively the AI system can not only identify whether or not an object is a human face, but also recognize the same person across multiple photographs. You only have to go to Google Photos to see how this works: you can select a person who has been identified, and Google Photos will show you all the pictures among your thousands of snaps that have that person in them.
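
A minimal sketch of what “multiple layers” means, using NumPy: each layer transforms the output of the previous one, so early layers can pick out simple features and later layers combine them into a verdict. The weights here are random placeholders, not a trained face recognizer:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer: a weighted sum of its inputs followed by a simple non-linearity."""
    return np.maximum(0, inputs @ weights + biases)

# Placeholder weights for three layers: 64 pixel values -> 32 -> 16 -> 1 score.
w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 1)), np.zeros(1)

image = rng.normal(size=(1, 64))     # a stand-in for a tiny 8x8 photo
h1 = layer(image, w1, b1)            # first layer: crude shapes and edges
h2 = layer(h1, w2, b2)               # second layer: combinations (eyes, nose, ...)
face_score = h2 @ w3 + b3            # final layer: "how face-like is this?"
print(face_score.shape)              # (1, 1) - training would make the score meaningful
```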

What AI Is Not

It’s not a humanoid robot. There are many instances, especially in literature and films, where AI is embodied in a robot or a human; in the 2001 movie A.I., for example, it’s a little boy. The anthropomorphized versions may be more intuitive for us to understand. Think of the distinction between the body and the mind: it’s hard for us to imagine the mind without associating it with a body, or at least a brain. But in reality, the mind is an abstract concept and should ideally be form independent. Similarly, AI is a conceptual construct, and we should refrain from giving it any specific physical manifestation, human or otherwise.

AI and Optimization

In a way, AI is the logical culmination of the model that we’ve used in this book. In the cycle of connect–quantify–optimize, the AI kicks in with the optimization models. AI will help companies redefine themselves, morph their services and products, and reshape service bundles.

If you agree with the idea that the creative aspects of cognition are among the hardest for computers to emulate, and that strategy is essentially a creative exercise, then there may be a need for humans and AI to work together in defining strategy for the foreseeable future. The best chess players in Freestyle Chess are human–computer combination teams (centaurs, as they are called). But there is no doubt in my mind that the strategic winners will be those that can effectively harness AI at the core of the business, feed the engine with data generated and gathered at scale through all the digital interfaces of the company and other players, and continue to build more innovative interfaces with their customers. This is why the end game for Google and Amazon is AI, and if you consider the number of ways in which we already interact with these businesses and provide them with transactional and interactional data, you will no doubt immediately grasp their ability to build highly nourished AI engines.

Optimization is neither formulaic nor predictable. We don’t know how we will optimize a business model—that is, tweak or transform it—until we get the interaction and continuous data stream. It’s also a learning pattern at which AI will very likely end up being much better than humans in the long term.

So, You Think the Brain Is Superior to the Computer?

Every discussion on the power of computers is bracketed by the comparison to the human brain, and the dwarfing of any known computer by the fantastical power of the human brain. Estimates by Ray Kurzweil suggest the brain has a capability of around 10 quadrillion calculations per second (cps), and it runs on 20 watts of power. By comparison, the world’s best computer today can do 34 quadrillion cps, but it occupies 720 sq. m of space, costs $390m to build, and requires 24 megawatts of power. (I would recommend that you read the great article by Tim Urban referenced here.)1

The brain’s sophistication is far, far ahead of the computers, considering all the miraculous things it can do. It is a giant neural network—capable of massively parallel processing—simultaneously collecting and processing huge amounts of disparate data. I’m tapping away on a laptop savoring the smell and taste of coffee while listening to music on a cold cloudy day in a warm cafe surrounded by art. The brain is simultaneously assimilating the olfactory, visual, aural, haptic, and environmental signals, without even trying too hard.

What does this tell us about the future of jobs? It would appear, therefore, that we are decades away from computers that can replace brain functions and, therefore, jobs. Let’s look at this a little more closely, though.

The exponential trajectory of computers and software will probably lead to affordable computers with the capacity of a human brain arriving by 2025 and, more scarily, to computers with the computing capacity of all humans put together by 2040. Note that this is looking at computing power alone, which is distinct from intelligence and from singularity. This is made possible by any number of individual developments and the collective effort of the computer science and software industry. Kevin Kelly2 points to three key accelerators, apart from the well-known Moore’s law: the evolution of graphics chips, which are capable of parallel processing, leading to the low-cost creation of neural networks; the growth of big data, which allows these ever more capable computers to be trained; and the development of deep learning, the layered and algorithmically driven learning process that brings much efficiency to how machines learn.

So, the hubris around the human brain may actually survive another few decades, and thereafter the question might not be whether computers can be as good as humans, but how much better than the human brain a computer could be. Initially estimated by Ray Kurzweil to arrive around 2045, but now believed by many futurists to be closer to 2060, the point of singularity is the point when computers actually surpass humans in intelligence and set themselves off on a path of exponential improvement. But that has been well argued elsewhere and no doubt will be again, including the moral, ethical, and societal challenges it will bring.

I actually want to look at the present and sound a note of warning to all those people still in the camp of human brain hubris. Let me start with another compliment to the brain. Consider this apocryphal discussion between two friends meeting after ages.

A: How have you been? What are you doing nowadays?

B: I’m great, I’ve been playing chess with myself for ages now.

A: Oh? How’s that? Sounds a bit boring.

B: Oh no, it’s great fun, I cheat all the time.

A: But don’t you catch yourself?

B: Nah, I’m too clever.

One of the most amazing things about the brain is how it’s wired to constructively fool us all the time. We only think we’re seeing the things we are. In effect, the brain is continuously short-circuiting our complex processing and presenting simple answers. This is brilliantly covered by Kahneman3 and many others. If we had to process every single bit of information we encounter, we would never get through the day. The brain allows us to focus by filtering out complexity through a series of tricks: peripheral vision, selective memory, and many other sophisticated devices are at play every minute to allow us to function normally. If you think about it, this is probably the brain’s greatest trick: building and maintaining an elaborate hoax that keeps up the fine balance between normalcy and what we would call insanity, thereby allowing us to focus sharply on the specific information that needs a much higher level of active processing.

And yet, put millions of these wonderful brains together and you get discordant politics, bad electoral choices, wars, environmental catastrophe, stupidity on an industrial scale, and a human history so chock-full of poor decisions that you wonder how we ever got here. You only have to speak with half a dozen employees of large companies to collect a legion of stories about how the intelligence of organizations is often considerably less than the sum of the parts. There are plenty of tales about the smart individuals at Kodak who had actually created a digital camera earlier than almost anyone else. It would be fair to say that we haven’t yet mastered the ability to put our brains together in any kind of reliably repeatable and synergistic way. We are very much in trial-and-error mode.

This is one of the killer reasons why computers are soon going to be better than humans. Computers have been designed to network, to share, pool, and exchange brain power. We moved from the original mainframe (one giant brain), to PCs (many small brains), to a truly cloud-based and networked era (many connected brains working collectively, much, much bigger than any one brain). One of the most obvious examples is blockchain. Another is the driverless car. Most of you might agree that, as of today, you would rather trust a human (perhaps yourself) than a computer at the wheel of your car. And you may be right to do so. But here are two things to ponder. Your children will have to learn to drive all over again, from scratch. You might be able to give them some guidance, but realistically, maybe 1 percent of your accumulated expertise behind the wheel, from your thousands of driving hours, will transfer to your kids. Let’s assume you hit an oil slick on the road and almost skid out of control. You may, from this experience, learn to recognize oil slicks, deal with them better, perhaps learn to avoid them or slow down. Unfortunately, only one brain will benefit from this: yours. Every single person must learn this by experience. When a driverless car has a crash today because it fails to distinguish a white truck against a bright sky, it also learns to make that distinction (or is made to). But importantly, this upgrade goes to every single car using the same system, or brain. So you are now the beneficiary of the accumulated learning of every car on the road that shares this common brain. Can you imagine the explosive rate of that learning?

Kevin Kelly talks about a number of different kinds of minds/brains that might ensue in the future, different from our own. If, like the airline industry, automotive companies agree to share this information following every accident or near-miss, then you create a similar superbrain and start to get the benefit of every car on the road, irrespective of the manufacturer. Can you even compute how quickly your driverless car would start to learn? Nothing we currently know or can relate to prepares us for this exponential model of learning and improvement.

The human brain also betrays us in a number of ways. The quality of your training, upkeep, and performance management of the brain varies dramatically from person to person. Here are some ways in which we’re already behind computers, considering just one activity: driving.

Computation

The most obvious one: our computational abilities are already vanishingly small compared to even the average pocket calculator. This is obvious, but when you apply it to, say, calculating how hard to brake to ensure you stop before you hit the car that’s just popped out in front of you, but not so hard that you risk being hit by the car behind you, you’re already no match for the computer. Jobs that computers have taken over on the basis of computation include programmatic advertisement buying and algorithmic trading.
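
As a back-of-the-envelope illustration of the braking decision above (the gap and the braking limit below are invented numbers, not a vehicle-dynamics model):

```python
def required_deceleration(speed_mps, gap_m):
    """Deceleration (m/s^2) needed to stop just within the available gap,
    from the constant-deceleration formula a = v^2 / (2 * d)."""
    return speed_mps ** 2 / (2 * gap_m)

speed = 40 * 0.44704   # 40 mph in metres per second, roughly 17.9 m/s
gap = 30.0             # metres to the car that just pulled out (illustrative)

needed = required_deceleration(speed, gap)
print(f"{needed:.1f} m/s^2 of braking needed")      # about 5.3 m/s^2
print("within full-braking limits?", needed < 8.0)  # ~8 m/s^2 is a rough dry-road ceiling
```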

Memory

Almost as apparent as the first point: nobody needs to be told that computers are better at remembering than we are. Although data stored on a drive does degrade over time, in general you would trust a computer to retain an overwhelmingly large amount of data, and, as importantly, to retain it free from any cognitive biases. Human memory is, after all, incredibly selective! What computers lack is the ability to connect ideas in the way that human brains can. If you burnt your finger on a hot iron when you were five years old, some part of your brain warns you while reaching for a completely different kind of heating appliance 50 years later, even though you might have forgotten the original incident explicitly.

Sensing and Observation

Would you know if the grip on your tires had dropped by 10 percent? By 5 percent? What if your engine were performing suboptimally, or your brakes were 3 percent looser than normal? Have you ever missed a speed limit sign as you came off a freeway or motorway? Have you ever realized with a fright that there was something in your blind spot? A computer, armed with sensors all around the car, is much less likely to miss an environmental hazard, vehicular data point, or road sign than you are. All this is before we factor in distractions, less-than-perfect eyesight and hearing, and just plain unobservant driving. Other observation-based professions where computers are already at work include security and flight navigation.

Reaction Time

Any driving instructor will tell you that the average human reaction time is around a second. In other words, at 40 mph (roughly 18 meters per second), you will have covered some 17 meters before your brain and body even start to react. By the time you’ve actually slammed the brakes or managed to swerve the car, you may well be 20 to 25 meters down the road. By contrast, there is already evidence of autonomous vehicles being able to pre-empt a hazard and slow down, even more so if the hazard involves another car using the same shared brain.
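
The arithmetic is simple: the distance covered before you even start braking is speed multiplied by reaction time (the 0.1-second figure for an automated system below is illustrative):

```python
MPH_TO_MPS = 0.44704   # metres per second in one mile per hour

def reaction_distance(speed_mph, reaction_time_s):
    """Metres travelled before braking even begins."""
    return speed_mph * MPH_TO_MPS * reaction_time_s

print(round(reaction_distance(40, 1.0), 1))   # ~17.9 m for a typical human second
print(round(reaction_distance(40, 0.1), 1))   # ~1.8 m for a fast automated system
```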

Judgment

The problem with our brains is that we rarely allow them to work to their potential. In the United States, in 2020, over 38,000 people were killed in traffic accidents. The top four causes of crashes in the United States are distracted driving, drunk driving, speeding, and reckless driving. The underlying causes may be stress, anger, tiredness, alcohol, or mobile phone distraction. The bottom line is that our emotional state dramatically impacts our judgment. And yet we often use judgment as a way of bypassing complex data processing. That is invaluable where the data doesn’t exist or the time available is too limited. But with the increasing quantification of the world, we may need less judgment and simply more processing, as with the Hawk-Eye system in tennis.

Training

How long did it take you to learn to drive? A week? A month? Three? How long did it take you to become a good driver? Six months? This process has to be repeated for each person, every time. The collective cost is huge, and it scales linearly. Computers, as we’ve pointed out, can share learning via a giant virtual brain. Extending this analogy, in a number of jobs automation will, over time, reduce training costs dramatically. This includes front desk operations, call centers, retail assistants, and many more. The time to train an AI has already gone from years to weeks.

Tip: Think of any task, and then think of all the reasons why any two people might have different levels of capability for that same job. How many of these differences are human shortcomings that an AI system would not be encumbered by?

While we should agree that the human brain is awe-inspiring for all it can do, it’s also important to recognize its many limitations. Besides, the human brain has had an evolutionary head start of some six million years, and the fact that we’re having this discussion suggests that computers have reached some approximation of parity in about 60-odd years. We shouldn’t be under any illusion about how this will play out going forward. A last cautionary point: the various cognitive functions of the brain peak at different points in our lives, some as early as our 20s and some later. But they do peak, and then we’re on our way down!

So, AI represents a collective intelligence. It should not be benchmarked or bounded by human intelligence. It can already do a few things better than human brains, and definitely more consistently than humans. Fortunately, for most industries, there should be a significant phase of overlap during which computers are actually used to improve our own functioning. Our window of opportunity for the next decade is to become experts at exploiting this help.

At the same time, it is undoubtedly true that, as of today, the best AI platforms perform well only on incredibly narrow, closed domains (such as the games of chess and Go). And AI is far from perfect, so it’s critical that we evaluate the cost of errors. Two key principles should apply here. The first is the error expectation—that is, the probability of errors multiplied by their cost. The probability depends on the quality of the AI and the training data; errors are likely to occur when the AI encounters data sets it has not been trained on. For autonomous vehicles, the cost of error is very high—in human lives—while the cost of error for an autonomous vacuum cleaner may be relatively low. The second principle is the comparison with human performance. The AI may not be perfect, but does it improve on the current human performance benchmark? This is more ethically fraught than you would think. Imagine that in a world where all cars were autonomous, the United States registered 20,000 deaths per year through fatal accidents. This is significantly lower than the 38,000 deaths registered in 2020. However, if the accidents involving autonomous cars are of the kind that could easily be avoided by humans, such as mistaking a bus for an open road, or not recognizing a trailer as a moving vehicle, then they will always feel like unnecessary accidents that human drivers would easily have avoided. In essence, this is a form of the trolley problem: choosing one set of mistakes over the other. Even if the macro numbers are better, every mishap will be the result of this choice. That’s why we need to tread carefully.
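
The first principle, error expectation, is just probability multiplied by cost; a tiny sketch with invented numbers shows why a rare error can still dominate when its cost is enormous:

```python
def error_expectation(p_error, cost_per_error):
    """Expected cost per decision = probability of an error x cost of that error."""
    return p_error * cost_per_error

# Invented numbers purely for illustration.
vacuum_cleaner = error_expectation(p_error=0.05, cost_per_error=5)               # bumps a chair
autonomous_car = error_expectation(p_error=0.000001, cost_per_error=5_000_000)   # a serious crash

print(vacuum_cleaner, autonomous_car)   # 0.25 vs 5.0: the far rarer error still costs more
```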

The Journey to AI

AI will find its way into every industry and almost every interface. Education, automotive, chess playing, voice assistants, customer service: these are just some examples. But what is the journey to AI? And what areas does it traverse? Let’s look at the following picture.

The axes I’ve used in this example are (a) whether the computer is taught by humans or learns on its own, which I’ve already explained earlier, and (b) whether it does things we can do or things we can’t do. A quick note on the second axis. A lot of the rationale for using AI in business is to replace expensive or error-prone humans with cheaper and more consistent software. This includes areas such as customer service and claims processing, for example, or even early-stage driverless cars and making movie trailers. But the much larger, and potentially unbounded, opportunity lies in doing things that humans simply can’t do, such as microsurgery and Mars exploration.

Almost all software historically has been about teaching computers to do what we can do, and this is where AI also starts. When we teach cars to drive, we do so with the underlying belief that we are only teaching them what we can do. I call this the zone of condescension. You can imagine why: it assumes that humans are better and computers can only do what we teach them to do.

However, once we start this journey, the technology evolves quite rapidly. There are already things that we have taught the car to do that we can’t do ourselves: have 360-degree vision, sense road conditions for ice, or communicate with other cars, for example. This is the zone of pride, in the way we might treat a child whom we have taught, but who surpasses our own capabilities.

When we allow the software in the car to learn on its own, however, we enter a different zone. It may, for instance, learn to recognize traffic lights and deal with them accordingly. This is the zone of indulgence: the car has learned to do something that we can already do. We are happy for the car, but again, this is like watching our child learn to walk. Importantly, though, the car is now learning on its own, so it can soon deal with a million different types of traffic lights.

But pretty soon, the AI moves into the zone where it’s learning by itself to do things that we can’t. For example, it may run transport networks at speeds that would be impossible for humans to run safely, or find ways to spot and fight cancer. I call this the zone of awe, not just because of the kind of problems it can solve (this zone is unbounded), but also because it could even decide which problems it chooses to solve. This is the gray area of singularity, of the fear of takeover by computers. But there is an ocean of useful purposes it can be put to. For the record, there isn’t one single path; different types of AI can take different routes through this landscape.

Figure 19.1 The journey to AI

Multisensory Environments

Perhaps the word is supersensory—that is, all the ways in which machines can sense beyond human capability. Here are some. Human beings need light to see; machines can use night vision and heat sensing to see in the dark. A human can only process visual signals from one pair of eyes; a machine could have hundreds of eyes producing visual signals, distributed geographically across a building, a city, or even the world. A car can easily have 360-degree vision. By extension, machines can ingest any number of sensory inputs (sounds, temperature, sight, or smell), and they can do so remotely. A new satellite called Surface Water and Ocean Topography (SWOT) has recently been launched by NASA and the French space agency CNES. SWOT will be critical to monitoring the earth’s fresh water. Monitoring all the earth’s fresh water is not a trivial task, but sensing via satellite helps!

The Role of Data

A brand-new AI system is like a human baby: a great brain but zero ability. The baby evolves at an amazing pace by continuously absorbing signals from her environment, delivered through her senses. The nascent AI brain needs to be similarly trained, exposed to a stream of data and contexts, and allowed to build up a base of competence and decision capability. The thing is, its processing ability and speed of learning are hundreds of times faster than the human brain’s, so in a short while it can do things, in its narrow field, that an adult would struggle to do. Think of the chess computer playing millions of games against itself in a day. The availability of abundant data, and the evolution of tools to handle ever-larger quantities of data, is one more key enabler of the AI era. This is also why you will increasingly hear about synthetic data, the ability of AI systems to generate their own training data.
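
As a toy illustration of synthetic data, here is a sketch that generates labeled training examples from a simple invented rule instead of collecting them; real pipelines use simulators or generative models rather than a one-line rule:

```python
import random

def make_synthetic_example():
    """Generate one labeled example from a crude, invented rule
    (animals under 10 kg are 'cats') instead of labeling real data."""
    weight = round(random.uniform(2, 40), 1)
    label = "cat" if weight < 10 else "dog"
    return {"weight_kg": weight, "label": label}

training_set = [make_synthetic_example() for _ in range(5)]
print(training_set)
```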

The Future of AI

We are still in the world of narrow intelligence, where AI engines are trained for specific tasks and are useless at others. For instance, a self-driving car’s AI software may be amazing at negotiating rush-hour traffic but would be useless if it were asked to perform customer service in a call center or help with insurance claims processing. This is likely to continue for a while, until we get to general AI, which more closely represents the human mind, where the intelligence can be pointed at any problem domain with some effectiveness. The big question is what happens after that, because there is no reason for AI to stop evolving at levels far beyond human capabilities. Tim Urban’s piece refers to this as super-intelligence—an IQ 100-fold superior to humans.

The Future of Jobs

Understandably, there is a lot of concern about the future of white-collar work in a world of AI. There are those who feel that this is the beginning of the end for work as we know it. There are two key strategies that I think we as managers need to adopt to deal with this possible obsolescence.

The first is largely good news. Before AI starts to replace managers, there will be any number of roles where it enhances and augments us. Broadly speaking, if a job is very routine and mechanical, the chances of its being replaced by a software program, and not a particularly sophisticated one, are quite high. However, for doctors, drivers, and a host of other professions, in the short term there could well be good news. The reason is very simple: humans are quite poor at a lot of tasks, and AI will make us better. Take driving, for example. Before AI displaces all drivers, we are likely to see a phase where AI works alongside drivers in selected environments, and we see a sharp decline in road accidents and fatalities. Similarly, AI should help doctors and customer care professionals dramatically improve their output before it starts replacing them. During this phase, we need to become very good at using AI to improve our own performance.

The second phase will definitely be one of replacement. This will coincide with another boost in productivity; however, in this case it may mean a boost achieved by reducing the actual amount of human input for the same end goal. The problem with capitalism is that the benefits of this labor productivity don’t go to the displaced workers. They go to the owners of the technology—that is, capital. This means that the only way in the current environment to benefit from the AI surge is to be on the right side of AI; in other words, you need to be owning, building, or investing in AI. There will be a range of new jobs created by AI, but usually those jobs will require new skills and re-education. It won’t be easy for a professional driver to participate in creating AI for self-driving cars, nor will it be any easier for doctors, lawyers, or marketing managers, unless they start early.

A note of caution: this is not as linear a journey as the earlier paragraph makes it seem. A recently released MIT report on the Future of Work calls out three ways in which technology impacts jobs. First, it enhances productivity, which frees people to deliver more output by using better tools. Second, the overall improvement in productivity boosts income across the board, which leads to higher spending and more goods and services being created and consumed. And third, it directly creates new jobs, in areas such as training AI algorithms. Each of these comes with its specific challenges, though. Productivity increases often translate into headcount reductions in large firms; thousands of automation programs are justified on the basis of headcount reduction. Income increases, as we know from the same MIT report, are increasingly skewed in favor of the already wealthy, who have a lower marginal spend per additional dollar earned. And finally, the new jobs generated by emerging tech are often geographically removed from the old jobs and require a different skill set. A factory worker on a Detroit production line does not have access to the software job in Germany or China that is now delivering the same component through a robotic process.
