♦   8   ♦

Robotics and Biology: The Inevitable Merging of Man and Machine

As a child, I believed that by the time I grew up, we would all have robots like Rosie, from The Jetsons, cleaning up after us. In this 1960s cartoon show, Rosie is the domestically adroit robot maid of a family, the Jetsons, in the year 2062. The on-demand economy appealed to my juvenile sensibilities: why should anyone waste time doing dishes or folding clothes? And I wasn’t very popular in school; I didn’t have many friends. So I longed for a droid friend like C-3PO, Luke Skywalker’s robot buddy from Star Wars.

Rosie never arrived. Just after the turn of the century, I got a Roomba, an automated vacuum cleaner that goes round and round and gets stuck on rug fringes and wedges itself into corners. Even now, the nearest things to C-3PO on the mass market are A.I. assistants such as Siri, Google Home, and Amazon Alexa.

In fact, scientists and technologists have found that some of the hardest things to teach a robot to do are the very things that we learn soonest, and even skills that seem to be innate to us. In 2008, UC Berkeley roboticist Pieter Abbeel started building a robot, BRETT (an acronym for Berkeley Robot for the Elimination of Tedious Tasks). The first tedious task Abbeel started BRETT on was folding laundry; but he and his team quickly realized that teaching a robot to fold laundry was going to be harder than they had envisaged.

A robot finds it remarkably difficult to figure out what is going on in a pile of laundry. Towels, socks, and pants are jumbled together haphazardly, making every pile of laundry uniquely complex. Abbeel’s team spent months studying laundry, holding towels up in the air, and taking pictures of how unfolded and folded towels sat in baskets. “Can you use multiple images to build a 3-D model of the current shape?” Abbeel asked in an NPR Planet Money podcast.44 “Because once you can do that, then you can analyze that 3-D shape [and] find where the corners are.”

With years of effort, Abbeel’s team built software that allowed BRETT to fold a towel in twenty minutes. With practice and greater computing speed, BRETT cut that time down to ninety seconds. But unexpected objects in the hamper—such as a balled-up T-shirt—can bring BRETT to a grinding halt. As Abbeel said on the podcast, “Once you start working in robotics you realize that things that kids learn to do up to age ten . . . are actually the hardest things to get a robot to do.”

Training a robot to walk up a ladder, open a door, or fold laundry is considerably harder than training it to read x-rays, search through legal briefs, or write sports articles. That is because robots struggle to perform tasks that lack explicit rules—even tasks that humans take for granted. If I asked you to fold that towel, you would know what I meant. But there are a million ways to fold a towel. And the act of folding comprises numerous steps, many of which are also hard to describe to a machine. “Grab the two corners of the towel” assumes that the robot can distinguish between a corner and an edge, and between a towel and a sock or a pair of underpants. In a cluttered laundry basket full of randomly piled clothes and fabrics, these are not easy distinctions for a robot to draw.

In fact, scientists have finally built robots that can fold laundry—but they are nothing to be excited about. One, the FoldiMate, requires humans to attach clips to mark the key points in the clothing. The other, the Laundroid, will cost tens of thousands of dollars. Neither was for sale at the time of writing, and there is some skepticism as to how well they will work.

In this light, we do seem far from building robots that can converse with us, help us keep our homes clean, or perform unstructured tasks. If you watched the videos of the robots from the last DARPA Robotics Challenge, you might believe we’ll never see Rosie in real life.45 The robots were required to navigate an eight-task course simulating a disaster zone. Tasks included driving alone, walking through rubble, tripping circuit breakers, turning valves, and climbing stairs. Products of concerted efforts by the world’s best roboticists, the robots were slow and clumsy: they moved at the speed of molasses and kept falling over. To date, no robots are commercially available that can perform tasks such as folding laundry, organizing a closet, or cleaning and restocking a bathroom—tasks that we humans consider mere chores.

But Rosie Is Coming

Consisting most importantly of processors and electronic brains, robots follow the general exponential performance improvement that Moore’s Law describes. They are essentially hardware controlled by software—software that is now the narrow A.I. I described earlier. As the software gets better, the robots’ movements become more stable and their communication more effective.

The reason we (so recently!) laughed at the way the robots kept falling down in the DARPA challenge and began to believe that Rosie would remain an object of science fiction is that exponential technologies can be deceptive. Progress is very slow at first, so slow that it disappoints, but then the doublings compound and disappointment turns into amazement. That is what I believe we will experience in robotics in the 2020s. Amazing progress is being made in the underlying hardware and software, in part because costs have plummeted. The single-axis controller, a core component of most robots’ inner workings, has fallen in price from $1,000 to $10. The price of critical sensors for navigation and obstacle avoidance has fallen from $5,000 to less than $100. And the software—the A.I. that I described in chapter 5—is advancing on a similar exponential curve.
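The deceptive arithmetic behind this pattern is easy to sketch. Assume, purely for illustration, a capability that starts at 0.001 percent of some goal and doubles once per generation:

```python
# Exponential progress looks flat for a long time, then sudden.
# Illustrative assumption: capability starts at 0.001% of a goal
# and doubles once per generation, Moore's-Law style.
capability = 0.00001  # fraction of the goal
for generation in range(1, 18):
    capability *= 2
    if generation in (5, 10, 15, 17):
        print(f"after {generation} doublings: {capability * 100:.3f}% of the goal")
```

Ten doublings move the needle from 0.001 percent to barely 1 percent, which still looks like failure; the next seven carry it past the goal entirely.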

In the DARPA Grand Challenge of 2004 for autonomous vehicles, no self-driving car came close to finishing the course. Just eleven years later, self-driving cars are legal in more than a dozen states and are a common sight on the streets of the San Francisco Bay Area. Incidentally, three teams, with three different designs, completed DARPA’s 2015 Challenge course.

In voice recognition, robots are already close to attaining the capabilities of C-3PO. Apple, Amazon, and Google do decent jobs of translating speech to text, even in noisy environments. Their voice-recognition systems struggle with accents, hard-to-pronounce words, and colloquial abbreviations, but they are, in the main, quite serviceable. Though no A.I. bot has passed the Turing Test—the gold standard of A.I., whereby humans conversing with a machine cannot tell that it is not human—the machines are getting closer. Siri and her compatriots will soon be able to converse with you in complex, human-like interactions.

Still, machines have yet to crack voice recognition in more complicated, multi-voice environments, where the task involves recognizing the voice communications of several humans simultaneously in a loud environment. This is a far more difficult task than single-voice recognition and illustrates the sophistication of processing in our brains.

The computational demands of these difficult tasks would seem insurmountable if robots’ development were linear. But, as I explained earlier, by 2023 your iPhone will have computational speed equivalent to that of a human brain. It is becoming clear that, given A.I.’s continuing exponential progress, robot builders’ and A.I. coders’ present computational obstacles are on the verge of becoming irrelevant.

The rise of machine learning, too, heralds a generation of robots that can learn through doing and that will become smarter as they spend more time with us. Google has demonstrated real-time text- and voice-translation software, built in part with human input through Google Translate. Google’s DeepMind system, which beat the world’s leading Go player in 2016, learned to play this millennia-old board game, orders of magnitude more complicated than chess, by watching humans play Go.46 Even more fascinating, DeepMind surprised human Go experts with moves that, at first glance, made no sense but ultimately proved innovative. The humans taught the robot not just to play like a human but to think for itself in novel ways. Though this does not amount to passing a Turing Test, it is a clear sign of intelligence emerging beyond human instruction.

For all of these reasons, I expect that a robot maid—a robot like Rosie—will be able to clean up after me by 2025. Robots will soon become sure-footed; and a robot will, rather than merely open a door, succeed in opening it while holding a bag of groceries and ensuring that the dog doesn’t escape. When I buy Rosie, I may have to show her around the house, but she’ll quickly learn what I need, where my washer and dryer are located, and how to navigate around and clean the bathroom. And I expect that she will be as witty and lovable as she was on TV. No, she won’t have the artificial general intelligence that will make her seem human, but she will be able to have fun conversations with us.

In fact, a very limited version of Rosie can be found at hospitals around the country. Her name is Tug, and she is produced by Aethon Inc. of Pittsburgh. Tug performs the most essential duties of today’s hospital orderly, such as delivering medications and equipment to different floors. Tug costs considerably less than the orderly position she replaces.

But Tug doesn’t clean up rooms or do anything more complex than navigate the hospital corridors. The idea of robots replacing humans wholesale and quickly is unrealistic. Rather, robots will replace humans piecemeal, task by task, through specialization.

In this fashion, the robots will gradually, task by task, assume the jobs of humans in manufacturing plants, in grocery stores, in pharmacies. Hospitals rely on A.I.-driven systems in their pharmacies right now to spot potential problems due to conflicting medicines. I can envisage the job of pharmacist being completely automated. Further down the economic food chain, McDonald’s is in the process of rolling out automated order-taking at its counters. This could be matched by an automated engine to cook hamburgers and fries. One of these already exists. It’s from a venture-backed company called Momentum Machines and can make a hamburger every ten seconds. That may sound ominous; yes, robots may eat our jobs. But, in the rapidly aging developed world, we may need robots to take care of our aging populations and maintain economic stability in the face of staggering demographic change that will leave us collectively with far more work to do than workers to do it.

How Robots May Save the World

According to the U.S. Census Bureau, between 2020 and 2030, for the first time in human history, the global population of people older than sixty-five years will eclipse the population of people under the age of five.47 The trend is moving faster in developed countries but is accelerating in less-developed countries as well. According to the Population Reference Bureau, the proportion of people aged sixty-five or more in less-developed countries has increased by 50 percent since 1950, from 4 percent to 6 percent.48 In more-developed countries, the proportion of people aged sixty-five or more, which was 8 percent in 1950, increased to 16 percent in 2014 and should rise to a record 26 percent by 2050. In those developed countries, the rate of childbirth is declining.

This represents a trend toward an aging populace, with fewer and fewer people supporting, both through health care and through taxation, the ballooning ranks of the retired and aged. In a country such as Japan, the ratio of retired people to working adults will continue to soar beyond its current record. That could tax the country as nothing has before: caring for an aging population could overwhelm all other economic and political priorities. For this reason, the Japanese government has embraced the concept of robotic caregivers for the elderly.

Japan’s robot solution to its unavoidable aging crisis calls up a future that makes many uneasy, in which the aged are wholly reliant on robots and in which people no longer invest significant energy in caring for their parents. This sits uncomfortably with deeply held societal ideals about filial piety and blood duty to take care of those who protected and fed us when we were small. But such may be the tradeoffs required to sustain an economy in which fewer and fewer people of working age are deployed to productive occupations in society. At its core, too, this touches on the highly emotional topic of whether we are better off ceding nearly all work to the robots.

The robots may become not only a saving grace for an aging developed world—but also our best friends.

Should Robots Kill People?

Japan may favor robots to protect its elderly and preserve its economy, but a far more contentious discussion, concerning the use of robots for destructive purposes, is under way right now with tremendous implications for humanity. The debate centers on whether we should allow robots powered by A.I. to kill people autonomously. More than 20,000 people signed an open letter in July 2015 that called for a worldwide ban on autonomous killing machines. A thousand of these signatories were A.I. researchers and technologists, including Elon Musk, Stephen Hawking, and Steve Wozniak.49 Their logic was simple: that once development begins of military robots enabled to autonomously kill humans, the technology will follow all technology cost and capability curves; that, in the not-so-distant future, A.I. killing machines will therefore become commodity items, easy to purchase and available to every dictator, paramilitary group, and terrorist cell. Also, of course, despotic (or even wayward democratic) governments could use these machines to control and cow their populations.

Yet the latest talks on “lethal autonomous weapons systems” wrapped up in August 2018 with no agreement on a ban. The Group of Governmental Experts, convened in Geneva under the auspices of the United Nations Convention on Certain Conventional Weapons, agreed only to continue discussions the following year.50 So all the major military powers are pushing ahead with plans to build killer robots, with the United States as a vocal proponent. The U.S. Department of Defense even argues that A.I. and robotics will make military operations more precise and better able to uphold codes of conduct and rules of engagement.

That fully autonomous killing machines are a bad idea is a viewpoint that almost everyone outside of the military and a handful of government proponents can agree on. Even Ray Kurzweil, who is about as pro-robot as you can find, is staunchly opposed to programming robots to kill people without asking permission from a human controller. Such programming would, he believes, be a moral violation. Other critics, such as AJung Moon, cofounder of the Open Roboethics initiative, fear that allowing autonomous lethal force will tip us down the slippery slope toward a world in which the machines could act autonomously beyond the intent programmed into them.51 And, as DeepMind demonstrated on the Go board, robots made smart enough will likely have minds of their own, at least within the rules and environment they have mastered.

The military supporters of autonomous lethal force argue that robots on the battlefield might prove to be far more moral than their human counterparts. A robot programmed not to shoot women and children would not freak out under the pressure of battle. There would have been no My Lai Massacre if the robots had been in charge, they say. Furthermore, they argue, programmatic logic has an admirable ability to reduce the core moral issue to binary decisions. For example, a robot might decide in a second that it is indeed better to save the lives of a school bus full of children over the life of a single driver who has fallen asleep at the wheel.

These lines of thought are interesting and not wholly unwarranted. Are humans more moral if they can program robots to avoid the weaknesses of the human psyche and the emotional frailty that can cause even the most experienced military man to temporarily lose his sense of reason and morality in the heat of battle? Where it is hard to discern whether the opponent follows any moral compass, such as in the case of ISIS, is it better to rely on the cold logic of the robot warrior rather than on an emotional human being? What if a non-state terrorist organization devises lethal robots that have a battlefield advantage? Is that a risk that we are willing to take by developing them?

I have a cynical view on this: I do not think the public really cares much about whether robots will be allowed to kill people, the notion seeming too abstract. The American public has never taken much interest in whether drones should be equipped for autonomous kill shots. In fact, the public has taken little interest in the question of robots being used to kill people even in the United States. In Dallas, police used a bomb carried on a robot to kill Micah Johnson, the shooter who had allegedly killed five officers at a protest rally.52 Few questioned this use of force. And the first battlefield use of autonomous robots would likely take place far away, as the first drone killings did, in Afghanistan and Pakistan.

The Open Roboethics initiative is advocating an outright ban on autonomous lethal robots, a call echoed by nearly every civil rights organization and by many politicians. The issue will play out over the next few years. It will be interesting to see not only what final decision comes from world governance bodies such as the United Nations but also the decision of the U.S. military establishment and its willingness to sign an international accord on the matter. (The United States is a consistent holdout on treaties restricting military technology, a realm in which the nation has so far held a clear global advantage in the post–Cold War era.)

Do the Benefits Outweigh the Risks?

So now we come down to the question at hand. Do the benefits of robots outweigh the risks? If so, how do we mitigate the risks? Stopping altogether the penetration of robots into society and the world is by now a lost cause. Tug is not going back in the box. Google cars—robots that drive vehicles for us—are here and will probably not be stopped. Tesla cars with autopilot capabilities have already driven close to 2 billion miles, according to an estimate by MIT researchers.53 And as A.I.-endowed robots advance, their emergent capabilities will inevitably produce behavior we have not expected. The extreme risk is apocalyptic: the robots become smarter than we are and take over the world, rendering humans powerless on their own planet.

An equally troubling but less existential, and more realistic, risk is that the robots increasingly deprive us of our jobs. Some researchers, such as Erik Brynjolfsson and Andrew McAfee of the Massachusetts Institute of Technology, see the automatons inevitably gobbling up more and more meaningful slices of our work.54

Oxford University researchers Carl Benedikt Frey and Michael A. Osborne caused a tremendous stir in September 2013, when they asserted in a seminal paper that A.I. would put 47 percent of current U.S. employment “at risk.”55 The paper, “The Future of Employment,” is a rigorous and detailed historical review of research on the effect of technology innovation upon labor markets and employment. In a recent research paper, McKinsey & Company found that “only about 5 percent of occupations could be fully automated by adapting current technology. However, today’s technologies could automate 45 percent of the activities people are paid to perform across all occupations. What’s more, about 60 percent of all occupations could see 30 percent or more of their work activities automated.”56

The report also notes that the mere ability to automate work doesn’t make it a sensible thing to do. As long as $10-per-hour cooks are cheaper than Momentum Machines on fast-food lines, it’s unlikely that food-service jobs will succumb to automation.

The alternative extreme—no robots—is simply not realistic. The giant bubble of aging people could overwhelm most of the developed world, as well as many developing countries, such as China. Self-driving cars will save tens of millions of lives over the next decades. More agile and intelligent robots will take over the most dangerous human tasks and jobs such as mining, firefighting, search and rescue, and inspecting tall buildings and communications towers.

To put this in perspective, it seems we want robots that can do everything humans have trouble doing well, as long as the robots don’t capture our most unique capabilities or become much smarter than we are. Perhaps this is a good thing. A robotic caregiver, which on the surface may seem a heartless option, is compassionate compared with providing no caregiver or placing a child under harsh financial pressures. And, taking this argument further: perhaps the robots and the economic gains they allow will take our jobs but also provide us humans with huge gains in free time to pursue our passions.

To me, the crux of this matter will be maintaining the ability of humans to understand robots and stop them from going too far. Google is looking at building in a kill switch on its A.I. systems.57 Other researchers are developing tools to visualize the otherwise impenetrable code in machine-generated algorithms built using Deep Learning systems. So the question that we must always be able to answer in the affirmative is whether we can stop it. With both A.I. and robotics, we must design all systems with this key consideration in mind, even if that reduces the capabilities and emergent properties of those systems and robots.

Will All Benefit Equally?

On the question of whether general-purpose robots will benefit everyone equally, the answer is no: the rich will surely benefit more than the poor—because they will get the latest and greatest robots first. Note the differences between the smartphones we use here in the United States and those common in India and China. Our devices often cost more than $600, and theirs are as cheap as $30. We have the fastest processors, the longest-lasting batteries, and the best screens. Their devices are two to three years behind ours in features and function. But, frankly, I don’t see this as a problem, because effectively the rich are subsidizing the poor, paying for the technology advances. This was the same strategy that Elon Musk used in building the Tesla: with the Roadster and Model S, he targeted the high-end market and had people like me pay for the advances that led to a more-affordable Model 3.

Fans of the TV series The Jetsons may also recall the first episode, when Jane Jetson bought Rosie because she was an “old demonstrator model with a lot of mileage.” She was the only robot that this middle-class family of the future could afford. Rosie may not have had the latest features, but she provided tremendous benefit. That is the future we are looking at with robots—one in which everyone will eventually benefit, although some may get the advanced models earlier than others.

The worrisome issue when considering social equity is, as I discussed earlier, the disruption that robots will create in employment. This will have serious consequences unless we develop the safety nets, retraining schemes, and social structures to deal with an era of abundance and joblessness. We need to help people adapt to a new social order in which our status isn’t based just on the job we do but also—and perhaps more—on the contribution we make.

And, no matter what governments do, they will not be able to prevent automation, because it is a key to economic growth. There is no way we can sugarcoat the impact of technology on employment; we simply need to prepare for it. We need to learn about where technology is heading, to understand its impact, and to cushion the blow to those who will feel its negative effects most.

Does the Technology Foster Autonomy Rather Than Dependence?

Will we be dependent on our robots? To some extent, we already are, on the primitive ones: our cars, elevators, dishwashers, and practically everything else that runs on electricity. We will surely have the option of not using them, but that wouldn’t make for an easy life. And the dependence goes beyond gadgets with electric cords; without my smartphone and Internet access, I feel lost. Why would our dependence upon the robots that serve us and become our friends and companions be any different?
