12

Fully Automated Decision-Making

On December 12, 2016, Tesla Motors Club member "jmdavis" posted to a forum on electric vehicles, reporting on an experience he had had in his Tesla. While driving to work on a Florida freeway at about sixty miles per hour, his Tesla dashboard indicated a car ahead that he could not see because the truck immediately in front of him blocked his view. Suddenly, his emergency brakes kicked in, even though the truck ahead had not slowed. A second later, the truck veered onto the shoulder to avoid hitting the car in front, which had in fact stopped abruptly because of debris on the road. The Tesla had decided to brake before the truck in front had done so, allowing jmdavis's car to stop with plenty of room. He wrote:

If I was driving manually, it is unlikely that I would have been able to stop in time, since I could not see the car that had stopped. The car reacted well before the car ahead of me reacted and that made the difference between a crash and a hard stop. Strong work Tesla, thanks for saving me.1

Tesla had just sent a software update to its vehicles that allowed its Autopilot self-driving feature to exploit radar information to gain a clearer picture of the environment in front of the car.2 While Tesla’s feature worked when its cars were in self-driving mode, it is easy to imagine a situation where a car takes over control from a human in the event of an imminent accident. Carmakers in the United States have reached an agreement with the Department of Transportation to make automatic emergency braking standard on vehicles by 2023.3

Often, the distinction between AI and automation is muddy. Automation arises when a machine undertakes an entire task, not just prediction. As of this writing, a human still needs to periodically intervene in driving. When should we expect full automation?

AI, in its current incarnation, involves a machine performing one element of a decision: prediction. Each of the other elements (data, judgment, and action) represents a complement to prediction, something that becomes more valuable as prediction gets cheaper. Whether full automation makes sense depends on the relative returns to machines also doing those other elements.

Humans and machines can accumulate data, whether for input, training, or feedback, depending on the data type. A human must ultimately make a judgment, but the human can codify judgment and program it into a machine in advance of a prediction. Or a machine can learn to predict human judgment through feedback. This brings us to the action. When is it better for machines rather than humans to undertake actions? More subtly, when does the fact that a machine is handling the prediction increase the returns to the machine rather than a human also undertaking the action? We must determine the returns to machines performing the other elements (data collection, judgment, actions) to decide whether a task should be or will be fully automated.
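
One stylized way to summarize this logic is an illustrative inequality of our own (not a formal model): compare the net returns of the two configurations, holding machine prediction fixed in both,

\[ V_M - c_M > V_H - c_H \]

where \(V\) is the value the completed task generates, \(c\) is the cost of supplying judgment, action, and data handling, and the subscripts denote full automation (\(M\)) versus a human in the loop (\(H\)). Full automation makes sense only when the machine side of the inequality wins.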

Sunglasses at Night

Australia's remote Pilbara region has large quantities of iron ore. Most mining sites are more than a thousand miles from the nearest major city, Perth. All employees at the site are flown in for intensive shifts lasting weeks. They accordingly command premium wages, on top of the considerable cost of supporting them while on-site. It's not surprising that the mining companies want to make the most of them while they are there.

The large iron ore mines of mining giant Rio Tinto are highly capital intensive, not just in cost but also in sheer size. They take iron ore from near the surface, carving pits so enormous that a meteor impact would be challenged to replicate them. Thus, the main job is hauling: trucks the size of two-story houses carry the ore not just up from the pit but to nearby rail lines built to transport it north to waiting ports. The real cost to mining companies is therefore not people but downtime.

Mining companies have, of course, tried to optimize by running throughout the night. However, even the most time-shifted humans are not as productive at night. Initially, Rio Tinto solved some of its human deployment issues by employing trucks that it could control remotely from Perth.4 But in 2016, it went a step further, deploying seventy-three trucks that could operate autonomously.5 This automation has already saved Rio Tinto 15 percent in operating costs. The mine runs its trucks twenty-four hours a day, with no bathroom breaks and no need for air-conditioned cabs, even as temperatures soar above fifty degrees Celsius during the day. Finally, without drivers, the trucks do not need a distinct front and back, meaning they never need to turn around, which yields further gains in safety, space, maintenance, and speed.

AI made this possible by predicting hazards in the trucks’ way and coordinating their passage into the pits. No human driver needs to watch over the truck’s safety on-site or even remotely. And there are fewer humans around to create safety risks. Going even further, miners in Canada are exploring bringing in AI-driven robots to dig underground, while Australian miners are looking to automate the entire chain from ground to port (including diggers, bulldozers, and trains).

Mining is the perfect opportunity for full automation precisely because it has already removed humans from so many activities. These days, humans perform a directed but key set of functions. Before the recent advances in AI, everything except prediction could already be automated. Prediction machines represent the last step in removing humans from many of the tasks involved. Previously, a human scanned the surrounding environment and told the equipment precisely what to do. Now, an AI takes information from sensors and learns to predict obstacles and clear paths. Because a prediction machine can forecast whether the path is clear, mining companies no longer need humans to do so.

If the final human element in a task is prediction, then once a prediction machine can do as well as a human, a decision-maker can remove the human from the equation. However, as we will see in this chapter, few tasks are as clear-cut as the mining case. For most automation decisions, the availability of machine prediction does not necessarily make it worthwhile to remove human judgment and substitute a machine decision-maker, or to remove human action and substitute a physical robot.

No Time or Need to Think

Prediction machines made self-driving cars like Tesla's possible. But using prediction machines to trigger an automatic substitution of machine control for human control of a vehicle is another matter. The rationale is easy to understand: between the moment an accident is predicted and the required reaction, a human has no time for thought or action ("no time to think"). By contrast, it is relatively easy to program the vehicle's response. When speed is needed, the benefit of ceding control to the machine is high.

When you employ a prediction machine, its prediction must be communicated to the decision-maker. But if the prediction leads directly to an obvious course of action ("no need to think"), then the case for leaving human judgment in the loop is diminished. If the judgment can be hard-coded and the machine can handle the consequent action relatively easily, then it makes sense to leave the entire task in the machine's hands, as the sketch below illustrates.
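
To make "no need to think" concrete, here is a minimal sketch in code (the names and threshold values are hypothetical, ours alone, and not taken from any real vehicle system). The judgment, how to weigh a missed collision against an unnecessary brake, is specified by humans in advance; the machine then converts each prediction directly into an action:

    # Judgment is hard-coded in advance as a simple payoff rule; no human
    # needs to be consulted between prediction and action.
    def hard_coded_judgment(collision_probability: float) -> str:
        # A missed collision is judged far more costly than an unnecessary
        # brake, so the threshold is deliberately low (illustrative value).
        BRAKE_THRESHOLD = 0.02
        return "brake" if collision_probability >= BRAKE_THRESHOLD else "continue"

    # The prediction machine supplies the probability; the coded judgment
    # turns it into an action with no human in the loop.
    predicted_risk = 0.35
    print(hard_coded_judgment(predicted_risk))  # -> brake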

This has led to all manner of innovations. At the 2016 Rio Olympics, a new robotic camera videotaped swimmers underwater by tracking the action and moving to get the right shot from the bottom of the pool.6 Previously, operators remotely controlled cameras but had to forecast the location of the swimmer. Now, a prediction machine could do it. Swimming was just the beginning. Researchers are now working to bring such camera automation to more complex sports like basketball.7 Once again, a need for speed and codifiable judgment is driving the move to full automation.

What do accident prevention and automated sports cameras have in common? In each, there are high returns to quick action in response to predictions, and judgment is either codifiable or predictable. Automation occurs when the returns to machines handling all functions are greater than the returns to including humans in the process.

Automation can also arise when the costs of communication are high. Take space exploration. It is much easier to send robots into space than humans. Several companies are now exploring ways to mine valuable minerals from the moon, but they need to overcome many technical challenges. The one that concerns us here is how moon-based robots will navigate and act. It takes at least two seconds for a radio signal to get to the moon and back, so an earth-based human operating a moon-based robot is a slow and painful process. Such a robot cannot react quickly to new situations. If a robot moving along the surface of the moon suddenly encounters a cliff, any communication delay means that earth-based instructions may arrive too late. Prediction machines provide a solution. With good prediction, the moon-based robot’s actions can be automated, with no need for an earth-based human to guide every step. Without AI, such commercial ventures would likely be impossible.
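
The delay itself is simple physics, and a back-of-the-envelope check confirms the figure. Radio signals travel at the speed of light, roughly 299,792 kilometers per second, and the moon is on average about 384,400 kilometers away, so a round trip takes

\[ t = \frac{2d}{c} \approx \frac{2 \times 384{,}400\ \text{km}}{299{,}792\ \text{km/s}} \approx 2.6\ \text{seconds} \]

before counting any time for an earth-based operator to perceive the situation, decide, and respond.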

When the Law Requires a Human to Act

The notion that full automation may lead to harm has been a common theme in science fiction. Even if we were all comfortable with complete machine autonomy, the law might not allow it. Isaac Asimov anticipated the regulatory issue by hard-coding his fictional robots with three laws, cleverly designed to remove the possibility that a robot would harm any human.8

Similarly, modern philosophers often pose ethical dilemmas that seem abstract. Consider the trolley problem: Imagine yourself standing at a switch that allows you to shift a trolley from one track to another. You notice five people in the trolley’s path. You could switch it to another track, but along that path is one person. You have no other options and no time to think. What do you do? That question confounds many people, and often they just want to avoid thinking about the conundrum altogether. With self-driving cars, however, that situation is likely to arise. Someone will have to resolve the dilemma and program the appropriate response into the car. The problem cannot be avoided. Someone—most likely the law—will determine who lives and who dies.

At the moment, rather than code our ethical choices into autonomous machines, we've chosen to keep a human in the loop. For instance, imagine a drone weapon that could operate completely autonomously—identifying, targeting, and killing enemies by itself. Even if an army general could find a prediction machine that could distinguish civilians from combatants, how long would it take combatants to figure out how to confuse the prediction machine? The required level of precision may not be available any time soon. So, in 2012, the US Department of Defense put forward a directive that many interpreted as a requirement to keep a human in the loop on the decision whether to attack or not.9 While it is unclear if the requirement must always be followed, the need for human intervention, for whatever reason, will limit the autonomy of prediction machines even when they might operate on their own.10 Even Tesla's Autopilot software—despite being able to drive a car—comes with legal terms and conditions requiring that drivers keep their hands on the wheel at all times.

From an economist’s point of view, whether this makes sense depends on the context of potential harm. For instance, operating an autonomous vehicle in a remote mine or on a factory floor is quite different from operating on public roads. What distinguishes the “within factory” environment from the “open road” is the possibility of what economists call “externalities”—costs that are felt by others, rather than the key decision-makers.

Economists have various solutions for the problem of externalities. One solution is to assign liability so that the key decision-maker internalizes those otherwise external costs. A carbon tax, for example, plays this role for the externalities associated with climate change. But when it comes to autonomous machines, identifying the liable party is complex. The closer the machine is to causing harm to those outside the organization (and, of course, physical harm to humans within the organization), the more likely it will be both prudent and legally required to keep a human in the loop.
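
To see how assigning liability internalizes an externality, consider a stylized sketch (the notation is ours, for illustration only). Without liability, the operator of an autonomous machine weighs only its private cost \(C_{private}\), while society bears

\[ C_{social} = C_{private} + p \times D \]

where \(p\) is the probability of an accident and \(D\) is the damage to outsiders. Assigning liability adds the expected harm \(p \times D\) to the operator's own ledger, so the private calculation coincides with the social one, and the operator invests in safety, or keeps a human in the loop, accordingly.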

When Humans Are Better at the Action

Question: What is orange and sounds like a parrot?

The answer? A carrot.

Is that joke funny? Or this one: A little girl asked her father: “Daddy? Do all fairy tales begin with ‘once upon a time’?” He replied: “No, there are a whole series of fairy tales that begin with ‘If elected, I promise …’ ”

Okay, admittedly, economists are not the best joke tellers. But we are better at it than machines. Researcher Mike Yeomans and his coauthors discovered that if people think a machine recommended a joke, they find it less funny than if they believe a human suggested it. The researchers found that machines actually do a better job of recommending jokes, but people prefer to believe the recommendations came from humans. The people reading the jokes were most satisfied when they were told the recommendations came from a human but the recommendations were in fact generated by a machine.

This is also true of artistic achievement and athletic competition. The power of the arts often derives from the patron’s knowledge of the artist’s human experience. Part of the thrill of watching a sporting event depends on there being a human competing. Even if a machine can run faster than a human, the outcome of the race is less exciting.

Playing with children, caring for the elderly, and many other actions that involve social interaction may also be inherently better when a human delivers the action. Even if a machine knows what information to present to a child for educational purposes, sometimes it might be best if a human communicates that information. Over time, we humans may become more accepting of having robots care for us and our children, and we may even come to enjoy watching robot sports competitions, but for the time being we prefer to have some actions undertaken by other humans.

When a human is best suited to take the action, such decisions will not be fully automated. At other times, prediction is the key constraint on automation. When the prediction gets good enough and the judgment of payoffs can be prespecified (either because a person hard-codes it or because a machine learns it by watching a person), then the decision will be automated.

KEY POINTS

  • The introduction of AI to a task does not necessarily imply full automation of that task. Prediction is only one component. In many cases, humans are still required to apply judgment and take an action. However, sometimes judgment can be hard coded or, if enough examples are available, machines can learn to predict judgment. In addition, machines may perform the action. When machines perform all elements of the task, then the task is fully automated and humans are completely removed from the loop.
  • The tasks most likely to be fully automated first are the ones for which full automation delivers the highest returns. These include tasks where: (1) the other elements are already automated except for prediction (e.g., mining); (2) the returns to speed of action in response to prediction are high (e.g., driverless cars); and (3) the returns to reduced waiting time for predictions are high (e.g., space exploration).
  • An important distinction between autonomous vehicles operating on a city street and those operating on a mine site is that the former generate significant externalities while the latter do not. Autonomous vehicles operating on a city street may cause an accident whose costs are borne by individuals external to the decision-maker. In contrast, accidents caused by autonomous vehicles operating on a mine site only incur costs affecting assets or people associated with the mine. Governments regulate activities that generate externalities. Thus, regulation is a potential barrier to full automation for applications that generate significant externalities. The assignment of liability is a common tool used by economists to address this problem by internalizing externalities. We anticipate a significant wave of policy development concerning the assignment of liability, driven by increasing demand for automation in many new areas.