♦   5   ♦

The Amazing and Scary Rise of Artificial Intelligence

Many of us with iPhones talk to Siri, the iPhone’s artificially intelligent assistant. Siri can answer many basic questions asked aloud in plain English. She (or, optionally, he) can, for example, tell you today’s date; when the next San Francisco Giants’ baseball game will take place; and where the nearest pizza restaurant is located. But, though Siri appears clever, she has obvious weaknesses. Unless you tell her your mother’s name or record the relationship in Apple’s Contacts app, Siri will have no idea who your mother is, and so can’t respond when you ask her to call your mother. That’s hardly intelligent for an assistant that reads, and could potentially comprehend, every e-mail and text message I send, and that hears every phone call I make.

That’s OK. Siri and other intelligent assistants, such as Amazon’s Alexa and Microsoft’s Cortana, are undeniably useful despite their limitations. No longer do I need to tap at a keyboard to find the nearest service station or to recall what date Mother’s Day falls on. And Siri can remember all the pizza restaurants in Oakland, recall the winning and losing pitcher in any of last night’s baseball games, and tell me when the next episode of my favorite TV show will air.

Siri is an example of what scientists and technologists call narrow A.I.: systems that are useful, can interact with humans, and bear some of the hallmarks of intelligence, but would never be mistaken for a human. In the technology industry, narrow A.I. is also referred to as soft A.I. In general, narrow-A.I. systems can do a better job than humans can on a very specific range of tasks. I couldn’t, for example, recall the winning and losing pitcher in every major-league baseball game from the previous night.

Narrow A.I. is now embedded in the fabric of our everyday lives. The human-sounding phone trees that route calls to airlines’ support desks are narrow A.I., as are the recommendation engines of Amazon and Spotify. Google Maps’ astonishingly smart route suggestions (and mid-course modifications to avoid traffic) are classic narrow A.I. Such systems are much better than humans at accessing information stored in complex databases, but their capabilities are specific and limited, and exclude creative thought. If you asked Siri to find the perfect gift for your mother for Valentine’s Day, she might make a snarky comment, but she couldn’t venture an educated guess. If you asked her to write your term paper on the Napoleonic Wars, she couldn’t help.

Soon enough, though, Siri and other readily available A.I. systems will be capable of helping your children write a term paper on the Napoleonic Wars—or of just writing one from scratch. Siri and her ilk will create music, poetry, and art. In fact, they are already learning how to do so.

In September 2011, in Malaga, Spain, a computer named Iamus (the name deriving from a figure in Greek mythology with the power to understand the voices of birds) composed a trio for clarinet, violin, and piano, titled Hello World!, scoring it in traditional musical notation.19 Iamus was an embodiment of melomics (like genomics, but rooted in melody), a software system that takes a guided evolutionary approach to music composition. Exposed to centuries’ worth of music and digital scores, the artificial autonomous composer accumulates knowledge of music much as a human composer would.

Over some years, Iamus’s programmers taught the system the core rules of music composition—for example, that a single hand cannot strike a piano chord of more than five notes at once. They did this using a combination of human coding and machine learning, a key technique in computing in which algorithms learn rules and build sophisticated models of a system from existing data. At its core, music is data, with musicians its interpreters.

According to Iamus’s programmers, roughly a thousand rules are now hard-coded to help the machine compose beautiful music. But, rather than seeing it as a replacement for the human hand, its creators envisage melomics as a tool with which to enhance and accelerate creativity. It will enable composers to shape compositions by changing rules or steering an algorithm in new directions, rather than painstakingly plotting a composition point by point. Just as exciting, with a simple interface and the guidance of computers, melomics will enable anybody to compose pleasing music.
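To make the idea concrete, here is a minimal sketch, in Python, of a rule-guided evolutionary composer. It is not Iamus’s actual software, and every name, rule, and scoring function in it is a simplified assumption invented for illustration; it shows only the general shape of the approach the programmers describe: hard-coded rules reject impossible material, while a guidance score gradually nudges a random piece in a chosen direction.

```python
import random

PITCHES = list(range(60, 73))  # MIDI note numbers for C4 through C5


def playable_by_one_hand(chord):
    """Hard-coded rule from the text: a single hand cannot strike
    a chord of more than five notes at once."""
    return len(chord) <= 5


def random_chord():
    """Draw a candidate chord of one to six notes (some will break the rule)."""
    return sorted(random.sample(PITCHES, random.randint(1, 6)))


def legal_chord():
    """Keep drawing until the hard-coded rule is satisfied."""
    chord = random_chord()
    while not playable_by_one_hand(chord):
        chord = random_chord()
    return chord


def mutate(piece):
    """Evolve the piece by swapping one chord for a new candidate."""
    new = list(piece)
    new[random.randrange(len(new))] = random_chord()
    return new


def guidance(piece):
    """A stand-in preference: favor smooth motion between chord roots.
    A real system would encode far richer, partly learned criteria."""
    return -sum(abs(a[0] - b[0]) for a, b in zip(piece, piece[1:]))


def compose(length=8, generations=500):
    piece = [legal_chord() for _ in range(length)]
    for _ in range(generations):
        candidate = mutate(piece)
        # Rules act as hard constraints; guidance acts as a soft preference.
        if all(playable_by_one_hand(c) for c in candidate) and \
                guidance(candidate) >= guidance(piece):
            piece = candidate
    return piece


if __name__ == "__main__":
    for chord in compose():
        print(chord)
```

In a sketch like this, a composer “shapes” the output not by placing notes but by editing the rules or swapping in a different guidance function, which is the sense in which melomics promises to accelerate, rather than replace, human creativity.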

Ultimately, powerful computational systems—Siris on steroids—will reason creatively to solve problems in mathematics and physics that have bedeviled humans. These systems will synthesize inputs to arrive at something resembling original works or to solve unstructured problems without benefit of specific rules or guidance. Such broader reasoning ability is known as artificial general intelligence (A.G.I.), or hard A.I.

One step beyond this is artificial superintelligence, the stuff of science fiction that is still so far away—and crazy—that I don’t even want to think about it. This is when computers become smarter than we are. I would rather stay focused on today’s A.I., the narrow and practical stuff that is going to change our lives. The fact is that, no matter what the experts say, no one really knows how A.I. will evolve in the long term.

How A.I. Will Affect Our Lives—And Take Our Jobs

Let’s begin with our bodies. Already, a number of A.I. systems have proven better than human doctors at identifying breast cancer or pneumonia in some circumstances. According to an “A.I. vs Doctors” tracker compiled by the IEEE (Institute of Electrical and Electronics Engineers), as of 2018, doctors are clear winners against A.I. in only one skill—general diagnosis—and are somewhat ahead of A.I. in microscopy and heart scans.20 But A.I. has clearly bettered doctors in predicting or diagnosing pneumonia, heart attacks and strokes, autism, and nail infections, and has exceeded the capabilities of doctors to some degree in certain aspects of surgery. In diagnosing skin cancer and breast cancer, A.I. and doctors were essentially tied, with only narrow margins separating them. And in crucial tests of pattern matching and physical agility and stability, A.I. won clear victories.

A.I. technologies will also be able to analyze a continual flow of data on millions of patients and on the medications they have taken to determine which truly had a positive effect on them, which ones created adverse reactions and new ailments, and which did both. This ability will transform how drugs are tested and prescribed. In the hands of independent researchers, these data will upend the pharmaceutical industry, which works on limited clinical-trial data and sometimes chooses to ignore information that does not suit it.

The bad news for doctors is that we will need fewer of them. Famed venture capitalist Vinod Khosla estimates that technology will replace 80 percent of doctors.21 Physicians have so far largely resisted this interpretation. After all, there is a doctor shortage in many fields, and it’s not as if doctors are at risk of becoming unemployed. But that objection fails to take into account the exponentially improving nature of machine learning. As radiologist Robert Schier wrote in May 2018, “That is an understandable reaction from a practicing radiologist, but it is like looking at a kindergartener and believing that, because she cannot add or subtract very well, she will obviously never be able to read an abdominal ultrasound. It assumes limits to computer intelligence that might not exist.”22 And, as we’ve seen, A.I. is already doing better than human physicians in some areas.

Another profession that you might not expect to be at risk is the law. Only a few decades ago, a law degree was considered a ticket to a solid middle- or upper-middle-class life in the United States. Today, young lawyers are struggling to find jobs, and salaries are stagnant. Automation driven by A.I. has begun to rapidly strip away chunks of what junior attorneys used to do, from contract analysis to document discovery.

Symantec, for example, has a software product, Clearwell, that does legal discovery. Legal discovery is the laborious process of sifting through boxes of documents, reams of e-mails, and numerous other forms of information submitted to the court by litigants. Such tasks used to necessitate armies of junior associates. Clearwell does a far better job and has begun to obliterate an entire class of junior legal jobs.

As Thomas Davenport, a distinguished professor at Babson College, wrote in a column titled “Let’s Automate All the Lawyers,” in The Wall Street Journal:

There are a variety of other intelligent systems that can take over other chunks of legal work. One system extracts key provisions from contracts. Another decides how likely your intellectual property case is to succeed. Others predict judicial decisions, recommend tax strategies, resolve matrimonial property disputes, and recommend sentences for capital crimes. No one system does it all, of course, but together they are chipping away at what humans have done in the courtroom and law office.23

More broadly, however, an era of robo-law could be a boon to society. At present, the law remains the province of the well-heeled who have the means to pay for it. O. J. Simpson paid millions for his acquittal, a case notable mostly because of his celebrity and the color of his skin; rich white defendants buy the same outcome with shocking regularity. At the same time, laws are regularly stacked against poor and minority defendants in ways subtle yet devastating. One of the most glaring examples is the huge disparity between penalties for possession of crack cocaine and for possession of the powder form, which only the well-to-do can afford. Chemically, they are essentially the same drug. Whereas human legislators and judges can succumb to bias, an A.I. might be far more even-handed in applying the law.

A.I. will provide similar benefits—and take over human jobs—in most areas in which data are processed and decisions required. WIRED magazine’s founding executive editor, Kevin Kelly, likened A.I. to electricity: a cheap, reliable, industrial-grade digital smartness running behind everything. He said that it “will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now ‘cognitize.’ This new utilitarian A.I. will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”24

A.I. has made possible voice assistants for our homes that manage our lights, order our food, and schedule our meetings. It will also lead to the creation of robotic assistants such as Rosie of The Jetsons and C-3PO of Star Wars. And they won’t be expensive. Products such as Amazon Echo and Google Home cost less than smartphones do—and will get cheaper. In fact, these A.I. assistants will likely become free applications on our smartphones and tablets.

Does the Technology Foster Autonomy Rather Than Dependence?

Humanity as a whole can benefit from having intelligent computer decision makers helping us. A.I., if developed correctly, will not discriminate between rich and poor, or between black and white. Through smartphones and applications, A.I. is more or less equally available to everyone. The medical and legal advice that A.I. dishes out will surely turn on circumstance, but it won’t be biased as human beings are; it could be an equalizer for society.

So we truly can share the benefits equally. That is the good thing about software-based technologies: once you have developed them, they can be inexpensively scaled to reach millions—or billions. In fact, the more people who use the software, the more revenue it produces for the developers, so they are motivated to share it broadly. This is how Facebook has become one of the most valuable companies in the world: by offering its products for free—and reaching billions.

In considering benefits, we may make the mistake of forgetting that an A.I., no matter how well it emulates the human mind, has no genuine emotional insight or feeling. There are many occasions on which it matters that the person doing what we classify as a job is emotionally connected with us. We are known to learn and heal better because of the emotional engagement of teachers, doctors, nurses, and others. Unless we appreciate that, we will fail to recognize what we lose when we engage an A.I. in their place.

The greater problem with A.I., however, lies in its risks. This is the discussion I have avoided, the crazy stuff: what happens when it evolves to the point where it is smarter than we are? This is a real concern of luminaries such as Elon Musk, Stephen Hawking, and Bill Gates, who have warned about the creation of a “superintelligence.” Musk said he fears that “we are summoning the demon.”25 Hawking said that it “could spell the end of the human race.”26 And Gates wrote: “I don’t understand why some people are not concerned.”27

The good news is that engineers and policy makers are working on regulating A.I. to minimize the risks. The tech luminaries who are developing A.I. systems are devising safeguards such as kill switches and discussing ethical guidelines. In 2016, the White House hosted workshops to help it develop possible policy and regulations, and it released two papers offering a framework for how government-backed research into artificial intelligence should be approached and what those research initiatives should look like.28 The central tenet of its report, Preparing for the Future of Artificial Intelligence, is the same as this book’s: that technology can be used for good and evil and that we must all learn, be prepared, and guide it in the right direction.29 I found it particularly interesting that the White House acknowledged that A.I. will take jobs but also help solve global problems. The report concluded: “Although it is very unlikely that machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years, it is to be expected that machines will reach and exceed human performance on more and more tasks.”

Then there is the question of autonomy and dependence. We will surely be as dependent on A.I. as we are on our computers and smartphones. What worries me is the possibility of deceptive virtual assistants, such as Samantha from the movie Her. In the film, a sensitive, lonely man, Theodore Twombly, falls in love with Samantha, with no good result. She eventually tells him that she loves hundreds of other people and then loses interest in him because she has evolved far beyond humans.

The good news is that, by the White House’s estimate, Samantha is still a little less than two decades away.
