♦   2   ♦

Welcome to Moore’s World

Parked on the tarmac of Heathrow Airport, in London, is a sleek airliner that aviation buffs love. The Concorde was the first passenger airliner capable of flying at supersonic speed. Investment bankers and powerful businessmen raved about the nearly magical experience of going from New York to London in less than three hours. The Concorde was and, ironically, remains the future of aviation.

Unfortunately, all the Concordes are grounded. Airlines found the service too expensive to run and unprofitable to maintain. The sonic boom angered communities. The plane was exotic and beautiful but finicky. Perhaps most important of all, it was too expensive for the majority, and there was no obvious way to make its benefits available more broadly. This is part of the genius of Elon Musk as he develops Tesla: that his luxury company is rapidly moving downstream to become a mass-market player. Clearly, though, in the case of the Concorde, the conditions necessary for a futuristic disruption were not in place. They still are not, although some people are trying, including Musk himself, with his Hyperloop transportation project. (A quick update on this story: a startup in the United States is building a next-generation supersonic passenger jetliner that will cut the flight time from San Francisco to Tokyo from eleven hours to four and a half hours.)6

Another anecdote from London: in 1990, a car service called Addison Lee launched to take a chunk out of the stagnant taxi market. The service allowed users to send an SMS message to call for the cab, and a software-driven, computerized dispatch system ensured that drivers would pick up the fare seekers anywhere in the city within minutes.7 This is, of course, the business model of Uber. But Addison Lee is available only in London; its management has never sought to expand to new cities.

In 2013, Addison Lee was sold to the private-equity firm Carlyle Group for an estimated £300 million.8 In August 2018, Toyota invested $500 million in Uber at a valuation of $72 billion, roughly two hundred times the worth of Addison Lee.9 That’s because each of us can use the same Uber application in hundreds of cities around the world to order a cab that will be paid for by the same credit card, and we have a reasonable guarantee that the service will be of high quality. From day one, Uber had global ambition. Addison Lee had the same idea but never pursued the global market.

This ambition of Uber’s extends well beyond cars. Uber’s employees have already considered the implications of their platform and view Uber not as a car-hailing application but as a marketplace that brings buyers and sellers together. You can see signs of their testing the marketplace all the time, ranging from comical marketing ploys such as using Uber to order an ice-cream truck or a mariachi band, to the really interesting, such as “Ubering” a nurse to offer everyone in the office a flu vaccine. Uber’s former CEO, Travis Kalanick, claimed that ride-sharing services would replace car ownership entirely once self-driving car fleets entered the mainstream.10 What will happen to the humans who drive for Uber and other ride-sharing services today remains an open question.

So what makes conditions ripe for a leap into the future in any specific economic segment or type of service? There are variations across the spectrum, but a few conditions tend to presage such leaps. First, there must be widespread dissatisfaction, either latent or overt, with the status quo. Many of us loathe the taxi industry (even if we often love individual drivers), and most of us hate large parts of the experience of driving a car in and around a city. No one is totally satisfied with the education system. Few of us, though we may love our doctors, believe that the medical system is doing its job properly, and scary stats about deaths caused by medical errors—now understood to be the third leading cause of death in the United States—bear out this view. None of us likes our electric utility or our cell-phone provider or our cable-broadband company in the way we love Apple or enjoy Ben & Jerry’s ice cream. Behind all of these unpopular institutions and sectors lies a frustrating combination of onerous regulations, quasi-monopolistic franchises (often government sanctioned) or ownership of scarce real estate (radio spectrum, medallions, permits, etc.), and politically powerful special interests.

That dissatisfaction is the systemic requisite. Then there are the technology requisites. All of the big (and, dare I say, disruptive) changes we now face can trace their onset and inevitability to Moore’s Law. This is the oft-quoted maxim that the number of transistors per unit of area on a semiconductor doubles every eighteen months. Moore’s Law explains why the iPhone or Android phone you hold in your hand is considerably faster than supercomputers were decades ago and orders of magnitude faster than the computers NASA employed in sending men to the moon during the Apollo missions.
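To make the arithmetic behind that comparison concrete, here is a minimal sketch, assuming the eighteen-month doubling period cited above and a roughly fifty-year span from the Apollo era to today (the span and the framing are my own illustrative assumptions, not figures from the text):

```python
# Illustrative sketch: how an eighteen-month doubling period compounds.
# The fifty-year span and the doubling period are assumptions chosen for
# illustration, not figures taken from the text.

DOUBLING_PERIOD_YEARS = 1.5   # Moore's Law doubling period cited above
SPAN_YEARS = 50               # roughly Apollo era to the smartphone era

doublings = SPAN_YEARS / DOUBLING_PERIOD_YEARS
growth_factor = 2 ** doublings

print(f"Doublings in {SPAN_YEARS} years: {doublings:.1f}")
print(f"Implied growth in transistor count: about {growth_factor:.2e}x")
# ~33 doublings works out to roughly a ten-billion-fold increase, which is
# why a phone in your pocket dwarfs an Apollo-era computer.
```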

Disruption of societies and human lives by new technologies is an old story. Agriculture, gunpowder, steel, the car, the steam engine, the internal-combustion engine, and manned flight all forced wholesale shifts in the ways in which humans live, eat, make money, or fight each other for control of resources. This time, though, Moore’s Law is driving the pace of change and innovation to increase exponentially.

Across the spectrum of key areas we are discussing—health, transport, energy, food, security and privacy, work, and government—the rapid decrease in the cost of computers is poised to drive amazing changes in every field that is exposed to technology; that is, in every field. The same trend applies to the cost of the already cheap sensors that are becoming the backbone both of the web of connected devices called the Internet of Things (IoT) and of a new network that bridges the physical and virtual worlds. More and more aspects of our world are incorporating the triad of software, data connectivity, and handheld computing—the so-called technology triad—that enables disruptive technological change.

Another effect of this shift will be that any discrete analog task that can be converted into a networked digital one will be converted, including many tasks that we have long assumed a robot or a computer would never be able to tackle. Robots will increasingly seem humanlike and do humanlike things. Or we may deliberately make them more robot-like, precisely because their similarity to humans has become so eerily accurate that it freaks us out. That is exactly the reaction we saw when Google demonstrated its Duplex voice-generation and -recognition system in May 2018. Oh, the irony!

A good proportion of experts in artificial intelligence believe that such a degree of intelligent behavior in machines is several decades away. Others refer often to a book by the most sanguine of all the technologists, noted inventor Ray Kurzweil. Kurzweil, in his book How to Create a Mind: The Secret of Human Thought Revealed, posits: “[F]undamental measures of information technology follow predictable and exponential trajectories.” He calls this hypothesis the “law of accelerating returns.”11 We’ve discussed the best-recognized of these trajectories, Moore’s Law. But we are less familiar with the other critical exponential growth curve to emerge in our lifetime: the volume of digital information available on the Internet and, now, through the Internet of Things. Kurzweil measures this curve in “bits per second transmitted on the Internet.” By his measure (and that of others, such as Cisco Systems), the amount of information buzzing over the Internet is doubling roughly every 1.25 years.12 As humans, we can’t keep track of all this information or even know where to start. We are now creating more information content in a single day than we created in decades or even centuries in the pre-digital era.

The key corollary that everyone needs to understand is that as any technology becomes addressable by information technology (i.e., computers), it becomes subject to the law of accelerating returns. For example, now that the human genome has been translated into bits that computers process, genomics becomes de facto an information technology, and the law of accelerating returns applies. When the team headed by biochemist and geneticist J. Craig Venter announced that it had effectively decoded 1 percent of the human genome, many doubters decried the slow progress. Kurzweil declared that Venter’s team was actually halfway there, because, on an exponential curve, the time required to get from 0.01 percent to 1 percent is equal to the time required to get from 1 percent to 100 percent.
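The arithmetic behind Kurzweil’s retort is simple compounding. Here is a minimal sketch, assuming only that the decoded fraction doubles at a steady rate (the doubling framing is mine, added to illustrate the exponential argument):

```python
import math

# Fraction of the genome decoded at each milestone in the anecdote.
start, midpoint, finish = 0.0001, 0.01, 1.0   # 0.01%, 1%, 100%

# On an exponential curve, elapsed time is proportional to the number of
# doublings, so compare the doublings on each leg of the journey.
doublings_first_leg = math.log2(midpoint / start)    # 0.01% -> 1%
doublings_second_leg = math.log2(finish / midpoint)  # 1% -> 100%

print(doublings_first_leg, doublings_second_leg)  # both ~6.64 doublings
# Equal numbers of doublings mean equal time: reaching 1 percent really is
# the halfway point on an exponential trajectory.
```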

Applying this law to real-world problems and tasks is often far more straightforward than it would seem. Many people said that a computer would never beat the world’s best chess grandmaster. Kurzweil calculated that a computer would need to calculate all possible combinations of the 100,000 possible board layouts in a game and do that rapidly and repeatedly in a matter of seconds. Once this threshold was crossed, a computer would beat a human. Kurzweil mapped that threshold to Moore’s Law and bet that the curves would cross in 1998, more or less. He was right.
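Here is a small sketch of that threshold-crossing logic. Every number in it (the baseline year, the baseline speed, the required speed) is invented purely for illustration; only the idea of mapping a fixed requirement onto an eighteen-month doubling curve comes from the passage:

```python
# Illustrative only: when does exponentially growing computer speed cross a
# fixed "good enough to beat a grandmaster" threshold? All numbers below are
# hypothetical; only the doubling-every-eighteen-months logic is from the text.
import math

BASELINE_YEAR = 1990
BASELINE_SPEED = 1.0          # arbitrary units of chess positions per second
REQUIRED_SPEED = 40.0         # hypothetical threshold, in the same units
DOUBLING_PERIOD_YEARS = 1.5

doublings_needed = math.log2(REQUIRED_SPEED / BASELINE_SPEED)
crossover_year = BASELINE_YEAR + doublings_needed * DOUBLING_PERIOD_YEARS

print(f"Threshold crossed around {crossover_year:.0f}")
# With these made-up numbers the curves cross in the late 1990s, which is the
# shape of the bet Kurzweil made: pick the threshold, then let Moore's Law
# tell you the year.
```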

To be clear, a leap in artificial intelligence that would make computers smarter than humans in so-called general intelligence is a task far different from and more complicated than a deterministic exercise such as beating a human at chess. So how long it will be until computers leap to superhuman intelligence remains uncertain.

There is little doubt, though, about the newly accelerating shifts in technology. The industrial revolution unfolded over nearly one hundred years. The rise of the personal computer spanned forty-five years and still has not attained full penetration on a global scale. Smartphones are approaching full penetration in half that period. (For what it’s worth, I note that tablet computers attained widespread usage in the developed world even faster than smartphones had.)

Already the general consensus among researchers, NGOs, economists, and business leaders holds that smartphones have changed the world for everyone.

It’s easy to see why they all agree. In the late 1980s, a cell phone—of any kind, let alone a smartphone—remained a tremendous luxury. Today, poor farmers in Africa and India consider the smartphone a common tool by which to check market prices and communicate with buyers and shippers. This has introduced rich sources of information into their lives. Aside from calling distant relatives as they could on their earlier cell phones, they can now receive medical advice from distant doctors, check prices in neighboring villages before choosing a market, and send money to a friend. In Kenya, the M-Pesa network, using mobile phones, has effectively leapfrogged legacy banking systems and created a nearly frictionless transaction-and-payment system for millions of people formerly unable to participate in the economy except through barter.13

The prices of smartphones, following the curve of Moore’s Law downward, have fallen so much that they are nearly ubiquitous in vibrant but still impoverished African capitals such as Lagos. Peter Diamandis observed, in his book Abundance: The Future Is Better Than You Think, that these devices provide Masai warriors in the bush with access to more information than the president of the United States had about two decades ago.14 Already the prices of smartphones and tablet computers as powerful as the earlier iPhones and iPads have fallen to less than $30 in China and India, putting the power of a connected supercomputer into the hands of all but the poorest of the poor. By 2023, those smartphones will have more computing power than our own brains.* (That wasn’t a typo—at the rate at which computers are advancing, the iPhone 12 or 13 will have greater computing power than our brains do.)

The acceleration in computation feeds on itself, ad infinitum. The availability of cheaper, faster chips makes faster computation available at a lower price, which enables better research tools and production technologies. And those, in turn, accelerate the process of computer development. But now Moore’s Law applies, as we have described above, not just to smartphones and PCs but to everything. Change has always been the norm and the one constant; but we have never experienced change like this, at such a pace, or on so many fronts: in energy sources’ move to renewables; in health care’s move to digital health records and designer drugs; in banking, in which a distributed-ledger technology called the blockchain threatens to render financial systems’ opaque procedures obsolete.*

It is noteworthy that, Moore’s Law having turned fifty, we are reaching the limits of how small a transistor can be made. After all, nothing can be smaller than an atom. But Intel and IBM have both said that they can adhere to the Moore’s Law targets for another five to ten years. So the silicon-based computer chips in our laptops will surely match the power of a human brain in the early 2020s, but Moore’s Law may fizzle out after that.

What happens after Moore’s Law? As Ray Kurzweil explains, Moore’s Law isn’t the be-all and end-all of computing; the advances will continue regardless of what Intel and IBM can do with silicon. Moore’s Law itself describes just one of five paradigms in computing: electromechanical, relay, vacuum tube, discrete transistor, and integrated circuit. Technology has been advancing exponentially since the advent of evolution on Earth, and computing power has been rising exponentially: from the mechanical calculating devices used in the 1890 U.S. Census, via the machines that cracked the Nazi Enigma code, the CBS vacuum-tube computer, and the transistor-based machines used in the first space launches, to the more recent integrated circuit–based personal computer.

With exponentially advancing technologies, things move very slowly at first and then advance dramatically. Each new technology advances along an S-curve—an exponential beginning, flattening out as the technology reaches its limits. As one technology ends, the next paradigm takes over. That is what has been happening, and it is why there will be new computing paradigms after Moore’s Law.
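For readers who want to see that shape rather than take it on faith, here is a minimal sketch of an S-curve using a logistic function; the ceiling, growth rate, and midpoint are arbitrary values chosen only to show the exponential-then-flattening pattern:

```python
import math

# A logistic (S-shaped) curve: growth looks exponential at first, then
# flattens as it approaches the technology's ceiling. The parameters are
# arbitrary, chosen only to show the shape.
def s_curve(t, ceiling=100.0, growth_rate=1.0, midpoint=10.0):
    return ceiling / (1 + math.exp(-growth_rate * (t - midpoint)))

for t in range(0, 21, 4):
    print(f"t={t:2d}  capability={s_curve(t):6.1f}")
# Early values grow by a roughly constant multiple from step to step (the
# exponential phase); later values crowd up against the ceiling (the
# flattening phase), at which point a new paradigm takes over.
```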

Already, there are significant advances on the horizon, such as the graphics processing unit (GPU), which uses parallel computing to create massive increases in performance, not only for graphics but also for the neural networks whose design is modeled on the architecture of the human brain. There are 3-D chips in development that can pack circuits in layers. IBM and the Defense Advanced Research Projects Agency are developing cognitive computing chips. New materials, such as gallium arsenide, carbon nanotubes, and graphene, are showing huge promise as replacements for silicon. And then there is the most interesting—and scary—technology of all: quantum computing.

Instead of encoding information as either a zero or a one, as today’s computers do, quantum computers will use quantum bits, or qubits, whose states encode an entire range of possibilities by capitalizing on the quantum phenomena of superposition and entanglement. Computations that would take today’s computers thousands of years, these computers will perform in minutes, as we will see in chapter 16.
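A conceptual sketch of why that range of possibilities grows so quickly: n classical bits hold just one of 2^n states at a time, while an n-qubit register is described by 2^n amplitudes at once. What follows is ordinary Python bookkeeping, not a real quantum program:

```python
import itertools

# Conceptual illustration of state-space size; this is ordinary Python
# bookkeeping, not a real quantum program.
n = 3  # number of bits / qubits

# n classical bits: the register holds exactly one of these states at a time.
classical_states = [''.join(bits) for bits in itertools.product('01', repeat=n)]

# n qubits: the register's state is a vector of 2**n complex amplitudes, one
# per classical state, all carried along simultaneously (superposition).
amplitude = (1 / (2 ** n)) ** 0.5   # equal superposition over all states
quantum_state = {state: amplitude for state in classical_states}

print(f"{n} classical bits -> 1 state out of {2 ** n}")
print(f"{n} qubits -> {len(quantum_state)} amplitudes tracked at once")
```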

So the computer processors that fuel the technologies that are changing our lives are getting ever faster, smaller, and cheaper. There may be some temporary slowdowns as they first proceed along new S-curves, but the march of technology will continue. These technology advances already make me feel as if I am on a roller coaster. I feel the ups and downs as excitement and disappointment. Often, I am filled with fear. Yet the ride has only just started; the best—and the worst—is ahead.

Are we truly ready for this? And, more important, how can we better shape and control the forces of that world in ways that give us more agency and choice?

* This is not to say that smartphones will replace our brains. Semiconductors and existing software have thus far failed to pass a Turing Test (by tricking a human into thinking that a computer is a person), let alone provide broad-based capabilities that we expect all humans to master in language, logic, navigation, and simple problem solving. A robot can drive a car quite effectively, but thus far robots have failed to tackle tasks that would seem far simpler, such as folding a basket of laundry. The comprehension of the ever-changing jumble of surfaces that this task entails is something that the human brain does without even trying.

* The blockchain is an almost incorruptible digital ledger that can be used to record practically anything that can be digitized: birth and death certificates, marriage licenses, deeds and titles of ownership, educational degrees, medical records, contracts, and votes. Bitcoin is one of its many implementations.
