Chapter 12

Loose change

Abstract

This chapter concludes the narrative tour through the maze that has characterized American science and technology policy for more than 225 years. The author revisits the major policy trends through the decades and makes the case that science and technology policy has shaped the nation and the world as we know it. The chapter emphasizes that, in addition to marshaling facts, data, analyses and forecasts, science policy achieves its greatest successes when it has applied political savvy, exploited personal relationships, and timed its efforts effectively, as indicated in many of the stories told in the earlier chapters. Several case histories are used to make policy points. The chapter spends considerable time on the story of Bell Laboratories, its wildly successful run of technology successes over decades, and the ultimate Bell System breakup; the author believes that the culture of Bell Labs reflected a critical characteristic of modern science and technology that was not well appreciated at the time, namely how complex the relationships are among basic research, applied research, innovation and development. This leads to a discussion of various models of how scientific advances move from basic research to new technology, including Donald Stokes’s Quadrant Model.

Keywords

American science and technology; Bell Laboratories; World War II; Congress; National economy; National Science Foundation

During the course of two centuries or more, science and technology have transformed American life in ways almost unimaginable. The pace of the transformation has accelerated remarkably in the past few decades, so much so that, as a species, we are facing difficulties adapting. In his book, Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations,1 Pulitzer Prize-winning author and New York Times columnist Thomas Friedman described the significant challenges we face today and prescribed a number of bromides that might ease our growing discomfort. He identified the major disruptive technologies—the iPhone, integrated circuits, search engines and the World Wide Web, the Internet, the cloud, DNA sequencing, and cognitive computation (also known as artificial intelligence)—that have fueled the accelerations. In the end, he argued that without a commitment to “lifelong learning” many of us will find it increasingly difficult to cope.

Shining a spotlight on the technologies that have driven the accelerations, as Friedman has done compellingly in his book, is invaluable. But spotting the policies that enabled the technological revolution is essential for successful planning. While most of the advances Friedman identified are creations of the last decade of the 20th century—large-scale integrated circuits being a notable exception—in truth, the era of rapid change began well before. And much of it had to do with policies that dramatically altered industrial research and development and, in the process, ushered in an era of extraordinary scientific creativity, entrepreneurship, and science as a global enterprise.

Prior to World War II, industry played the leading role in the American research theater. Government, to the extent it had any billing, was very much a supporting actor. The end of the war brought with it dramatic change. Industry did not cede the spotlight, but the federal government was no longer in the wings. Within several decades it would command an important part of the stage, becoming a major sponsor of long-term, fundamental research.

The federal government’s dominance in that arena resulted from the confluence of two policy streams. One was targeted: establishing federal agencies devoted to research and funding their programs generously, as Chapter 4 detailed. The other was ancillary, but just as consequential: enforcing antitrust laws more rigorously and changing the tax code. No description of the science and technology policy maze would be complete without an accounting of the impact of those two policy decisions.

The war effort allowed American manufacturers to recover from the devastation of the Great Depression. It also caused them to appreciate the importance of technology, which had propelled the Allies to victory. By the time the Axis Powers surrendered, there were few corporate leaders who did not recognize that innovation held the keys to the future profitability of their companies. Corporate giants of the post-war era, stalwarts such as AT&T, General Electric (GE), and General Motors (GM), and newbies such as IBM, Hewlett-Packard (H/P), and Xerox, saw “vertical integration” as the operating model that would lead to enduring corporate empires. Building on Vannevar Bush’s proposition that basic research was critically important to technological progress, they established central laboratories that amalgamated basic research, applied research, development, testing, and evaluation—everything needed prior to production and marketing. Many of them achieved great prominence, but none more so than AT&T’s Bell Laboratories. When it came to discovery and innovation, it was la crème de la crème.

Bell Labs attracted the best minds, and it gave them extraordinary latitude to pursue their most creative ideas. And that faith paid off. Its scientists invented the transistor—the foundation of semiconductor electronics—and the charge-coupled device (CCD)—the backbone of high-quality digital imaging. They also developed radio astronomy, the laser,2 and information theory. And they created Unix—the platform used by Apple’s Mac OS (operating system)—as well as the ubiquitous C programming language. Seven Nobel Prizes in physics and one in chemistry went to scientists—fourteen in all—working at Bell Labs. There was no other place in the world quite like it, and it is unlikely there will ever be another, at least in the private sector.

Explaining the extraordinary success of Bell Labs is like fitting pieces of a jigsaw puzzle together to reveal a great work of art. In his book, The Idea Factory: Bell Labs and the Great Age of American Innovation,3 Jon Gertner does just that. He captures the culture that defined Bell Labs, which, at its zenith in the late 1960s, employed 15,000 people at its sprawling New Jersey campus. Its staff included more than 1200 Ph.D.s, mostly in physics, chemistry, and materials science. Walter Isaacson’s commentary, “Inventing the Future,”4 which appeared in The New York Times Sunday Book Review in 2012, provides a succinct summary of Gertner’s work.

Although Bell’s management valued long-term research highly, it maintained an unwavering focus on AT&T’s business needs: developing, manufacturing, and capitalizing on technologies that connected people across the country and across the world reliably and at a reasonable cost. Throughout the years, directors of Bell Labs recognized that outstanding scientists and engineers were an invaluable commodity, and they gave them the freedom to discover and invent, trusting that their work would improve AT&T’s bottom line. They also understood that research breakthroughs could be transformational; that developing new technologies required patience; and that complex problem-solving profited from collaborative efforts that cut across scientific and engineering disciplines.

The culture of Bell Labs reflected a critical characteristic of modern science and technology that was not well appreciated at the time: how complex the relationships are among basic research, applied research, innovation, and development. Donald Stokes is generally credited with clarifying the nature of the connections in his 1997 treatise, Pasteur’s Quadrant: Basic Science and Technological Innovation.5 Until then, many science and technology policy professionals viewed the connections as a linear progression: basic research leading to applied research; applied research leading to innovation and development; and development leading to production.

Michael Armacost, president of the Brookings Institution, which published the book, framed the issue with these words in the Preface:

More than fifty years ago, Vannevar Bush released his enormously influential report, Science, the Endless Frontier, which asserted a dichotomy between basic and applied science. This view was at the core of the compact between government and science that led to the golden age of scientific research after World War II—a compact that is currently under severe stress. In this book, Donald E. Stokes challenges Bush’s view and maintains that we can only rebuild the relationship between government and the scientific community when we understand what is wrong with this view.

Stokes begins with an analysis of the goals of understanding and use in scientific research. He recasts the widely accepted view of the tension between understanding and use, citing as a model case the fundamental yet use-inspired studies by which Louis Pasteur laid the foundations of microbiology a century ago. Pasteur worked in the era of the “second industrial revolution,” when the relationship between basic science and technological change assumed its modern form. During subsequent decades, technology has been increasingly science-based—with the choice of problems and the conduct of research often inspired by societal needs.

On this revised, interactive view of science and technology, Stokes builds a convincing case that by recognizing the importance of use-inspired basic research we can frame a new compact between science and government.

Replace the word “societal” with “business” in the last line of the second paragraph, and you have the Bell Labs paradigm.

Stokes presented a compelling proposition for reducing the complex research and development relationships to a two-dimensional space, contending that it was more relevant and accurate in today’s world than the “linear model,” which he attributed to Vannevar Bush. A few clarifying words about Stokes’s paradigm in a moment, but first a comment about his assertion that Bush got the relationships wrong. Stokes wrote:

The belief that understanding and use are conflicting goals—and that basic and applied research are separate categories—is captured by the graphic that is often used to represent the “static” form of the prevailing paradigm, the idea of a spectrum of research extending from basic to applied:

[Figure: the static spectrum of research, extending from basic to applied]

This imagery in Euclidean one-space retains the idea of an inherent tension between the goals of understanding and use, in keeping with Bush’s first great aphorism [“Basic research is performed without the thought of practical ends.”], since scientific activity cannot be closer to one of these poles without being farther away from the other.

The distinction of basic from applied research is also incorporated in the dynamic form of the postwar paradigm. Indeed the static basic-applied spectrum associated with the first of Bush’s canons is the initial segment of a dynamic figure associated with Bush’s second canon, the endlessly popular “linear model,” a sequence extending from basic research to new technology:

[Figure: the linear model, a sequence extending from basic research to new technology]

The belief that scientific advances are converted to practical use by a dynamic flow from science to technology has been a staple of research and development (R&D) managers everywhere. Bush endorsed this belief in a strong form—that basic advances are the principal sources of technological innovation—and this was absorbed into the prevailing vision of the relationship of science to technology. Thus an early report of the National Science Foundation commented in these terms on this “technological sequence” from basic science to technology, which later came to be known as “technology transfer”….

Stokes was correct in asserting that the linear model does not capture the essence of research and development. But he was wrong in claiming that the model was “the staple of R&D managers.” It certainly was not true at Bell Labs, the most successful industrial R&D enterprise of the 20th century. Stokes was also wrong in asserting that Vannevar Bush propounded the “linear model.”

First, Bush never made the case for such a paradigm explicitly. And second, the extent to which he implied such a relationship exists must be viewed in the context of the times. Prior to World War II, the federal government’s support of academic research was vanishingly small. Bush’s objective was to alter the equation substantially. Emphasizing the importance of basic research was, above all, essential to a successful political strategy. And in the end, it worked. Harley Kilgore might have fought Bush over the structure of the National Science Foundation—ultimately prevailing—but he, too, understood the political necessity of stressing basic research in selling Truman on the importance of science in the federal government’s portfolio.

Despite his overreach, Stokes made a compelling case for a new way of thinking about research, innovation, and development. He captured the essence of his proposed paradigm with two simple diagrams. The first, the “Quadrant Model of Scientific Research,” provided a new static picture:

[Figure: the Quadrant Model of Scientific Research]

In his original work, Stokes left the lower left quadrant empty. Subsequent science and technology policy scholars recognized that data and taxonomy are essential for research of any kind, and added those categories.
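
For readers missing the figure, a minimal sketch of the quadrant layout follows. It uses the labels most commonly attached to Stokes’s diagram, with Bohr, Pasteur, and Edison naming three of the quadrants; the fourth entry reflects the later scholarship noted above rather than Stokes’s original figure, which left that cell blank.

```python
# A sketch of Stokes's Quadrant Model as a simple lookup keyed on the two
# questions that define its axes. The Bohr, Pasteur, and Edison labels are
# the names conventionally attached to three of the quadrants; the fourth
# entry follows the later scholars mentioned above, since Stokes's original
# figure left that cell empty.

QUADRANTS = {
    # (quest for fundamental understanding?, considerations of use?)
    (True, False): "Pure basic research (Bohr's quadrant)",
    (True, True): "Use-inspired basic research (Pasteur's quadrant)",
    (False, True): "Pure applied research (Edison's quadrant)",
    (False, False): "Data gathering and taxonomy (added by later scholars)",
}


def classify(understanding: bool, use: bool) -> str:
    """Return the quadrant label for a piece of research."""
    return QUADRANTS[(understanding, use)]


if __name__ == "__main__":
    # Pasteur's microbiology: fundamental insight inspired by practical need.
    print(classify(understanding=True, use=True))
    # Much of Bell Labs' long-term work arguably sat in the same quadrant.
    print(classify(understanding=True, use=False))
```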

Stokes replaced the linear dynamic model with one showing far greater complexity:

[Figure: Stokes’s revised dynamic model of the relationships among basic research, use-inspired research, and technological development]

Arguably, even his new model was too simplistic, because it failed to incorporate the critical role existing technology plays in “pure basic research.” The work of Arno Penzias and Robert Wilson, two Bell Labs physicists who won the Nobel Prize in 1978, illustrates that point perfectly. Using a state-of-the-art Bell Labs radio antenna, originally developed to detect signals reflected from balloon satellites, they made the somewhat serendipitous discovery of “cosmic microwave background radiation.” Their observation was momentous. It provided the smoking gun cosmologists needed to nail down the “big bang” theory of the origin of the universe.

Bell Laboratories was blessed with another benefit: extraordinary financial backing by its parent company, AT&T, which seemingly had limitless supplies of cash to spend on it. It was not a fortuitous circumstance. In 1921, Congress passed the Willis-Graham Act,6 which, in the interests of promoting universal communications access, exempted telephone companies from federal anti-trust laws. During coming decades, AT&T, which was the largest telephone company at the outset, either bought most of its competitors outright, or brought them under the Bell System operating umbrella. By the end of World War II, “Ma Bell,” as the collective enterprise was known, consisted of 22 local Bells, AT&T “Long Lines,” Western Electric, and Bell Labs. The overwhelming majority of Americans were customers of Ma Bell.

It was a cozy relationship; too cozy, critics said. The local Bells and AT&T Long Lines provided the telephone service. Western Electric manufactured all the equipment local Bell and Long Lines customers were allowed to use—unless they paid additional usage fees. And Bell Labs was the powerful R&D arm of the vertically integrated corporation. The federal government set the phone rates, based on the company’s operating costs, and guaranteed AT&T a predictable profit. Every member of Ma Bell’s family was required to pay a fixed fraction of its revenues to Bell Labs. In some sense, every dime Bell Labs spent was a cost AT&T claimed in its negotiations with federal rate regulators. On such a playing field, the company’s premier R&D facility was a freebie.

Overreach is often the downfall of the king of the mountain, and in 1949, as telecommunications began to embrace a raft of related technologies, AT&T’s monopolistic behavior drew the attention of the Justice Department. To stave off more draconian measures, the company agreed to limit its ownership to 85% of the American service networks. The “Consent Decree” allowed AT&T to prosper, and by not altering the Bell Labs financing arrangement, it ensured that the company’s R&D arm would thrive, as well.

The 1949 agreement notwithstanding, critics continued to hammer away at Ma Bell’s undue influence over rapidly developing telecommunications technologies. They argued that AT&T, by virtue of its size and control over the market, was stifling innovation. A vertically integrated monopoly might be good for the company, but it was not good for the country, so Judge Harold Greene ruled decades later. It came at the end of one of the most significant trials involving American science and technology policy.

With fiber optics and cellular mobile communication technologies already visible on the horizon, the Justice Department filed an antitrust action against AT&T in 1974 for restraint of trade, focusing the complaint on Western Electric’s monopoly over the equipment AT&T used throughout its business. William O. “Bill” Baker, a renowned chemist, had recently become president of Bell Labs when the Justice Department hit AT&T with the lawsuit. It came as Bell Labs was just about to turn fifty, and Baker could see that it probably would not be around to celebrate its centennial if the Justice Department prevailed in breaking up the company.7

It’s easy to see how he reached his pessimistic prediction. If the local Bells—or “Baby Bells” as they would be called in the aggregate following the divestiture—were free to purchase equipment on the open market, and if neither they nor Western Electric was obligated to support Bell Labs, the rationale for AT&T’s signature R&D facility would disappear, and its business model would fail.

It took them 8 years, but in early January 1982, AT&T and the Justice Department came to an accommodation. The local Bells would be cut free, Western Electric would have to compete in the open market, and AT&T would be allowed to enter other telecommunications arenas. When the divestiture took place on January 1, 1984, Bell Labs was split in two. The entity that retained the hallowed name became a wholly owned unit of AT&T Technologies, as Western Electric was now called. The other, redesigned to serve the needs of the newly independent Baby Bells, got the moniker Bellcore.

In the immediate aftermath of the settlement, Bell Labs continued to generate extremely high-quality research. For their work in the 1980s, two Lab scientists received physics Nobel Prizes, Steven Chu in 1997 for “laser cooling” of atoms and Horst Störmer in 1998 for discovering the fractional quantum Hall effect. But the divestiture eventually took its toll. In 1996, AT&T jettisoned its manufacturing and research arms, establishing Lucent Technologies as an independent company. A decade later, Lucent merged with Alcatel, a French electronics company, and by 2008, the once vaunted Bell Laboratories claimed only four physicists on its staff. So it was of little consequence when Alcatel-Lucent declared an end to all research in materials science, semiconductors, or any kind of “basic research.”

Bell Labs’ run as a powerhouse of scientific discovery and innovation lasted a little more than 80 years. We usually don’t think of the judiciary as a major player in science and technology policy. But Judge Greene’s decision to accept the Justice Department’s antitrust briefs was just that. The ensuing divestiture began Bell Labs’ long slide into oblivion, as Bill Baker had predicted at the time; but it opened up the field of telecommunications to thousands of new actors. Innovators flourished, venture capitalists made large bets on new technologies, entrepreneurs grew into billionaires, and companies such as Apple, Google, and Facebook became the new faces of the 21st century. All of that might have happened naturally, but it’s more than speculation that Judge Greene accelerated the start of the “Age of Accelerations.”

If you worked at or visited Bell Labs before the long slide downward began, you understood that bringing extremely smart people together from different science and technology disciplines all under one roof, giving them significant resources, and allowing them to make decisions with minimal interference from distant management was the best way to solve big, complex problems.

Steven Chu, the physics Nobel Laureate, took that conviction with him when he left Bell in 1987. He seized the opportunity to put the philosophy to work almost immediately after he arrived at Stanford University that same year. Even though he was a physicist, he initiated an interdisciplinary research effort to tackle difficult questions in biomedicine. Years later, after becoming director of Lawrence Berkeley National Laboratory in 2004, he challenged the LBNL staff to solve an extremely complex problem: finding efficient ways to turn sunlight into liquid fuels. Success would be a game changer for climate change.

In 2009, he had an opportunity to make an indelible mark in a much larger arena. That was the year he became Secretary of Energy. And in his first budget request, he proposed establishing a series of Energy Innovation Hubs, each having the feel of the Bell Labs experience, but on a smaller scale. Each Hub would focus on one of eight grand energy challenges; each would receive 5 years of guaranteed funding; and each would be untethered as much as possible from the Washington bureaucracy.

Chu knew what he wanted, but he was a political novice, and his attempt to change the energy equation turned out to be far more difficult than he had imagined. He sold the Hubs as a set of mini-Bell Labs, but his sales pitch fell flat on Capitol Hill. Members of Congress respected his pedigree and admired his Nobel Prize, but in private many of them confided that he came across as condescending and arrogant. They also criticized him for not adequately prepping his high-level staff, whom they said failed miserably in explaining what the Hubs could achieve that existing Department of Energy programs couldn’t. Even with Democrats fully in charge of the federal government, Chu found himself sailing into the wind with little room to tack.

Chu was brilliant, innovative, ethical, and a devoted public servant; but he was poorly schooled in politics. He had wonderful policy ideas, but he never fully understood how to minimize opposition to them in a town that relishes skewering every new kid on the block, no matter how smart and how honored. The high point of Chu’s tenure in Washington was probably his Senate confirmation hearing on January 13, 2009. Accompanied by his family, Chu elicited nothing but praise—even awe—from members of the Energy and Natural Resources Committee in the art-deco Dirksen Hearing Room.8 On that day, Chu had star quality.

If his Washington confirmation hearing engendered awe, an announcement a continent away on August 30, 2011 provoked utter disdain. That was the day Solyndra closed its doors for the last time.9,10 Two years earlier, the Fremont, California solar panel manufacturer had been the recipient of a $535 million loan guarantee as part of the American Recovery and Reinvestment Act (ARRA).11 It happened when a set of Obama Administration policy imperatives converged.

The White House wanted to get money out the door quickly to stimulate the badly failing American economy. It wanted to wean the nation off fossil fuels to combat climate change. It wanted to bolster the emerging American solar panel industry. And to satisfy proponents of renewable energy, it wanted to bring the federal government’s treatment of solar and wind power into harmony with its treatment of nuclear power. It very hastily placed a big, but risky, bet on Solyndra.

Chu, who embraced all the imperatives, decided to give the company his personal seal of approval, appearing in person with California Governor Arnold Schwarzenegger at the Fremont groundbreaking ceremony. His decision to do so, at the time, provoked consternation among Beltway cognoscenti. If the Department of Energy’s (DOE) gamble on Solyndra didn’t pay off, Chu would own the fallout of the hurried decision.

Experienced policymakers and bureaucrats had long recognized that visibly endorsing a poorly vetted project could cost them their jobs if the project turned sour and was more than a blip on the ledger. But Chu was not experienced, and Solyndra was far from a mom-and-pop venture. It had raised more than $800 million in venture capital, and the federal government was on the hook for more than half a billion dollars if the company failed to deliver.

Had Chu designated one of his low-ranking subordinates to represent the DOE at the groundbreaking, he might have survived beyond Obama’s first term. But he was so closely tied to the Solyndra fiasco, that not even his scientific accolades could save him. He resigned shortly after Obama took the oath of office in 2013.

Steve Chu’s rocky tenure at the Department of Energy showed that science and technology policy is not very different from many other innovative endeavors in life. Having a first-rate idea or product is just the beginning. To succeed, you need to know how to market it with a compelling and enticing story. You need to understand your customer—in Chu’s case, members of Congress—and you need to tailor your idea—Energy Innovation Hubs, for example—to meet their needs. You also have to understand the political landscape and know how to protect yourself from the barbs that will inevitably be hurled your way. Chu, whom I know personally, is a truly remarkable scientist and an outstanding public servant, but unfortunately, he never quite accommodated himself to the ways of Washington.

Steve Chu shared the 1997 Nobel Prize for developing a process known as “laser cooling.” The seemingly arcane subject involves using laser radiation to bring a beam of atoms traveling thousands of miles per hour to a virtual standstill. In that final state, the atoms behave as if they are almost frozen in place at a temperature close to “absolute zero,” a condition that allows atomic clocks to measure time with extraordinary precision.

Achieving low temperatures—although not nearly as low as Chu and others reached—had been part of the physics research toolkit for decades. But prior to the development of laser cooling, getting there required the use of helium refrigerators. Which brings us to a strange policy saga that began in 1960 and lasted nearly half a century. First, a few preliminaries.

Helium is a noble gas. It is light and chemically inert, and it liquefies at the lowest temperature of any element, never becoming solid, even at a temperature of absolute zero under normal atmospheric conditions. Its properties make it almost essential today for a variety of critical applications, among them, most significantly, semiconductor manufacturing, magnetic resonance imaging (MRI), advanced nuclear reactors, radiological weapons detectors, space applications, and many areas of fundamental physics research.

A century ago, as we noted in Chapter 4, other than party balloons, helium’s key application involved dirigibles. And to that end, only two things about it mattered. It was lighter than air, and it didn’t burn. Recognizing its potential military importance, Congress established a federal program12 in 1925 to capture helium as it emerged as a byproduct of natural gas production and to store it in an underground reserve. And there it sat for several decades, attracting little attention from policymakers.

By 1960, however, other helium applications had become increasingly apparent, leading Congress to authorize a significant upgrade to the infrastructure for helium recovery, purification, and storage. But instead of appropriating the money needed for the improvements, legislators directed the Interior Department to borrow funds from the federal Treasury. The “Helium Act Amendments of 1960,”13 which mandated the new program, further required the Interior Department to repay the borrowed money, including compound interest, within 35 years. To raise the funds, the department would simply sell gas from the Reserve on the open market.

The tab finally came due in 1995, by which time the Helium Reserve owed the Treasury more than $1.4 billion, even though the original cost was less than $20 million. Compound interest, as any investor or lender knows, is a potent financial instrument. It was a truly bizarre situation, because rarely, if ever, does the federal government mandate an authorized program to pay interest on the money the government used to establish the program in the first place.
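
To make the arithmetic concrete, here is a back-of-the-envelope sketch built from the chapter’s round figures; the fixed annual rate it infers is purely illustrative, not a number drawn from the statute or from Treasury records.

```python
# A back-of-the-envelope look at the compounding described above, using the
# chapter's round figures. The fixed annual rate is inferred from those
# figures for illustration; it is not taken from the 1960 statute or from
# Treasury records.

principal = 20e6   # roughly $20 million borrowed (chapter's figure)
balance = 1.4e9    # roughly $1.4 billion owed about 35 years later
years = 35

# Solve balance = principal * (1 + r) ** years for the implied rate r.
implied_rate = (balance / principal) ** (1 / years) - 1
print(f"Implied annual rate: {implied_rate:.1%}")  # roughly 13%

# Forward check: compounding the principal at that rate reproduces the debt.
debt = principal
for _ in range(years):
    debt *= 1 + implied_rate
print(f"Debt after {years} years: ${debt / 1e9:.2f} billion")
```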

Nonetheless, the 1960 law required the Interior Department to do just that. But meeting the $1.4 billion obligation in 1 year would require dumping a significant fraction of the gas in the Reserve all at once. That, in turn, would distort the helium market, which, by 1995, had attracted a significant number of private producers. The economic impact on those companies was uppermost in the minds of lawmakers as they debated what eleventh-hour actions to take. With the Office of Science and Technology Policy sitting on the sideline, they never focused on future scientific and technological needs.

The result was the “Helium Privatization Act of 1996,”14 which provided a market accommodation period, but still required the Interior Department to sell off gas in the Reserve, starting in 2005. Moreover, once the loan, including accrued interest, had been paid off, the Reserve would have to close, even if gas remained in the repository. By 2013, as the payout neared completion, the Reserve still contained a large quantity of helium, amounting to about 40% of the quantity produced annually at all of the natural gas wells.

The 1996 law was clear: The Reserve had to be closed, and the remaining helium effectively forfeited. But scientists, who were dismayed by the waste of a precious resource and concerned about what future helium price hikes might mean for research, decided to make their voices heard. They argued that helium supplies were finite, that demand would continue to increase, and that squandering the Reserve was technologically and economically shortsighted. In brief, they said, closing the Reserve was bad science and technology policy.15 They made their case forcefully and compellingly. The result was the “Helium Stewardship Act of 2013,”16–18 which maintained the Reserve, finally taking note of the needs of science and technology, and not simply the needs of legislators to remedy a misguided appropriations workaround dating back more than five decades.

Even so, Mark Elsesser, Manager of Science Policy at the American Physical Society, who has followed the issue closely, notes that research is not out of the woods. Increasing industrial and medical demand and potential constriction of future helium supplies will likely drive prices up, placing many university research programs at risk.19 To mitigate such an outcome, Elsesser helped initiate a program with the Defense Logistics Agency that enables academic helium users to partner with DLA and benefit from the agency’s lower negotiated prices.20

During the decades that spanned the helium saga, the world of science and technology changed dramatically. The demise of Bell Labs received immense publicity, but it was only one of many transformations that swept over America’s industrial research enterprise in the final decades of the 20th century. Apart from the pharmaceutical sector, most major companies sharply reduced their spending on long-term research—basic, use-inspired, or applied. “Vertical integration” and “central laboratories,” two catchphrases of the post-World War II era, were consigned to the dustbins of history.21

Ford, GE, GM, H/P, IBM, Sylvania, Xerox, and a host of other iconic corporations abandoned their full-service R&D facilities. Instead, they scoured the globe for innovations developed by scientists and engineers wherever they were located. They bought up smaller companies for their patent rights, and they struck agreements with universities to license innovations stemming from federally funded research, just as the 1980 Bayh-Dole Act22 envisioned. The corporate game plan no longer involved supporting research for the long haul. In the new environment, speed to market was paramount, and long-term R&D had to be sacrificed.

Significant modifications to the tax code and profound changes in the behavior of Wall Street traders played big parts in the transformation. The tax code part of the story involves the differential treatment of earned and investment income. The first is subject to a marginal rate that increases as income rises. The second, known as the capital gains rate, is fixed, and applies to income on investments held for a specified period of time, historically between 6 months and 2 years.

Between the early 1950s and the late 1980s, the highest marginal tax rate on earned income fell dramatically.23 At the same time, the tax rate on capital gains remained relatively flat, fluctuating between 20% and 40% over the course of five decades.24 Even though high-income earners rarely, if ever, paid the maximum marginal rate—92% in 1952, for example—the large disparity between earned income and long-term capital gains rates during the 1950s, 1960s, and 1970s was an incentive for shareholders to exercise patience with their investment portfolios.

The 1980s brought about dramatic changes in their behavior. The gap between the two rates was narrowing quickly, and by 1988 it had actually shrunk to zero. There was no longer any tax incentive to hold onto a stock any longer than the company’s near-term forecast warranted.
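
A stylized example, using illustrative round-number rates rather than the actual tax tables, shows why the narrowing gap mattered: when the two rates differ widely, patience is rewarded handsomely; when they match, as they did in 1988, the tax reason to wait disappears.

```python
# A stylized comparison of the incentive described above. The rates are
# illustrative round numbers (an assumption), not a reconstruction of the
# actual tax tables: gains on quickly sold stock are treated as ordinary
# earned income, while gains on stock held past the required period get the
# long-term capital gains rate.

def after_tax_gain(gain, ordinary_rate, capital_gains_rate, held_long_enough):
    """Return the after-tax profit on a realized gain."""
    rate = capital_gains_rate if held_long_enough else ordinary_rate
    return gain * (1 - rate)


gain = 100_000  # a hypothetical realized gain

# Earlier era: a wide gap between the two rates rewards patience.
quick = after_tax_gain(gain, ordinary_rate=0.70, capital_gains_rate=0.25,
                       held_long_enough=False)
patient = after_tax_gain(gain, ordinary_rate=0.70, capital_gains_rate=0.25,
                         held_long_enough=True)
print(f"Earlier era: ${quick:,.0f} on a quick sale vs ${patient:,.0f} after waiting")

# 1988: with the two rates matched, the tax reason to wait disappears.
for held in (False, True):
    print(f"1988, held long enough = {held}: "
          f"${after_tax_gain(gain, 0.28, 0.28, held):,.0f}")
```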

Technology also began to exert a major influence, as electronic trading grew in importance exponentially. Computer terminals replaced the trading pits on Wall Street, where men—and they were invariably men—who had “seats” on the Exchange used to shout out bids and then swap buy and sell order “slips.”

At almost warp speed, investment firms built their own trading floors and populated them with thousands of their own traders. Mathematicians and physicists, by the hundreds, abandoned science careers in academia for high-paying jobs at “hedge funds.” There, they developed complex trading instruments and algorithms that had little to do with the long-term projections of companies whose stocks and bonds might be on their radar screens. The “quants,” as they were called, wrote the codes; the computers did the rest, and they did it at ever-increasing speeds.

As computers became faster, real-time human decision-making became an impediment to profitmaking. High-frequency trading (HFT) completed financial transactions in fractions of a second, far faster than any individual could manage. HFT needed really smart minds at the beginning and really high technology at the end. Its dominance in today’s markets is staggering. In 2014, more than three quarters of all stock trades took place automatically at lightning speed.

In such an environment, it’s easy to see why corporate projections of future earnings—known as guidance, in financial parlance—extending much beyond a few quarters were far less relevant than they had been a decade or two earlier. Fast, faster, and fastest is what mattered—on Wall Street and in corporate board rooms. To the extent that companies still had functioning R&D engines, almost all of the workings were labeled with the D for Development.

There were exceptions: pharmaceuticals, most prominently, and those imbued with the culture of California’s Silicon Valley. Apple, Google, Facebook, Intel, and AMD, for example, were young enough to retain their entrepreneurial character and commitment to innovation. But in the rest of industrial America, companies increasingly looked for a path to new products that began far outside the factory gate or the corporate campus. More often than not it originated in universities or national laboratories, where long-term scientific research was the currency of the realm. The challenge was how to convert those breakthroughs into innovations and profitable industrial products. Optimizing “technology transfer,” as the process is known in the policy world, and avoiding Ehlers’ Valley of Death, has turned out to be far more difficult than policymakers originally imagined. It remains a work-in-progress.

James Simons, a stellar mathematician, was one of the original quants. In 1982, he founded Renaissance Technologies and, after earning tens of billions of dollars during the next few decades, he turned his attention to philanthropy, using a sizable portion of his profits to fund scientific research in universities, and at his own Flatiron Institute in lower Manhattan. Simons might be unique among quants in focusing on philanthropic support of science later in life. But the Simons Foundation, which he established in 1994, is by no means a lone ranger in the science philanthropy world. Niche players before the 2010 congressional election, which ushered in the age of American populism, science philanthropies began to grow in prominence as dysfunction became the Washington norm. They also began to grow in number.

The America COMPETES Act of 200725 had made a compelling case for federal support of basic science, arguing that innovation, economic growth, and global competitiveness depended on it. The 2007 legislation had set down markers for federal science agencies and their budgets, but 5 years later, it was clear the commitments would not be fulfilled anytime soon.

Assessing the gloomy Washington outlook and seeing the risks federal inaction posed to American scientific and technological leadership, six foundations launched the Science Philanthropy Alliance (SPA)26 in 2012. They viewed SPA, not as a substitute for the dominant role the federal government played in supporting basic science, but rather as a means of filling critical research gaps. Within half a dozen years, SPA’s membership had grown to twenty-four and its philanthropic reach had expanded significantly. In 2017, according to SPA’s survey,27 private support of basic research at major universities topped $2.3 billion, most of it for work in the life sciences. In that same year, by contrast, federal spending on basic research totaled about $34 billion.28 Foundations and individual philanthropists were far from achieving major billing, but they were no longer bit players.

It’s worth considering how the rise of private science giving might affect basic research. There are four obvious entries on the positive side of the ledger. Private giving can be opportunistic and effectively target deficiencies in the federal portfolio; it can take risks that federal bureaucrats shun; it can help smooth out federal budgetary swings; and it can partner with the federal government on major initiatives.

But the negative side of the ledger also needs to be examined. As private giving increases, legislators might well see it as a substitute for federal appropriations, allowing science budgets to be trimmed, especially in times of ballooning deficits. Philanthropies and foundations can be more opportunistic than federal agencies, but they can also be more capricious: they’re beholden to a small number of donors and board members, rather than millions of voters and taxpayers. They do not have to use a peer-review process, which, although far from perfect, usually provides protection from frivolous scientific ventures.

To skew the ledger more toward the positive side, private givers might consider a carrot to encourage good legislative behavior and a stick to discourage bad behavior. First the carrot—to promote higher federal support of basic research, philanthropies could offer to match appropriations increases up to a specified dollar amount. Now the stick—to deter appropriators from trimming science spending after they have assessed philanthropic commitments, private givers could refrain from developing their annual budgets until the appropriations process has ended.

There is little argument that federal support of basic research measured against the national economy has been stagnant for decades, hovering around 0.4% of the gross domestic product (GDP).29 That trend does not portend well for the future of a nation whose prosperity is increasingly tied to technological innovation.

The warning signs have been visible for some time. For example, in the “Global Innovation Index 2018 Report,”30 compiled by INSEAD, Cornell University and the World Intellectual Property Organization (WIPO), the United States still ranks only sixth—almost unchanged over the last decade—behind Switzerland, the Netherlands, Sweden, the United Kingdom, and Singapore, and barely ahead of Finland, Denmark, and Germany. And according to a number of economic analyses, wages of the average American in the 21st century have been suppressed, in part, by lagging productivity31 traceable to flagging innovation.

But innovation is a double-edged sword. It can improve the lot of the average worker by increasing productivity and take-home pay, as it did for three decades following the end of World War II. By so doing, it can ameliorate income and wealth disparity, which Thomas Piketty explored at length in his treatise, Capital in the Twenty-First Century.32 But it can also produce technological disruptions that lead to permanent workforce displacement, as PricewaterhouseCoopers’s John Hawksworth and Richard Berriman detail in a 2018 PwC report33 on the potential impact of automation in the 21st century. Managing these countervailing influences will be one of the biggest challenges for science and technology policymakers in the coming years.

We will look at the challenge in the context of Donald Trump’s successful 2016 election in the Epilogue. But before that, we need to look at two more issues that are vital to policymaking in the current era: the tension between globalism and nationalism and the disparities in STEM (science, technology, engineering, and math) education outcomes.

The last decade of the 20th century and the first two decades of the 21st century have seen extraordinary socio-economic transformations sweep across the continents, and most of them have been driven by advances in technology. Globalization, once a term confined to the realm of economics or foreign affairs, entered the popular idiom following Thomas Friedman’s 2005 international best seller, The World Is Flat.34 Telecommunications and, more generally, information technology made national boundaries fuzzier. Manufacturing became a global enterprise: cars assembled in Detroit might use parts fabricated in Mexico, China, Japan, or Germany. Service centers could provide customers with assistance 24 hours a day because they were located across twenty-four time zones.

Everyone, everywhere could connect with anyone, anywhere using smart phones, tablets, and laptops. If email was too cumbersome, you could send text messages or use WhatsApp. If you craved communities, you could find them on Facebook, Instagram, or LinkedIn. If you wanted to spread the gospel, you could tweet or post. If you needed to hear somebody’s voice or see somebody’s face, you could Skype across the world or use FaceTime.

Science, which was the initiator of the transformation, became a global enterprise, itself. Researchers worked in international teams at facilities located on almost every continent. When they could, they collaborated remotely. They shared their work on electronic bulletin boards, often before they published it in peer-reviewed journals that used bits instead of ink. Students from one country studied at universities in another country: sometimes they emigrated and sometimes they returned home. The face of the world was the face of science.

The changes happened in less time than it takes a baby to reach physical maturity. But, as any parent knows, reaching emotional maturity takes longer. That comparison fairly well describes the mismatch between the 21st century technological revolution and the human capacity to adapt to it. Globalism surrounds us, but tribalism is still alive and well.

The tension between nationalism and internationalism is profound, more so in today’s technologically interconnected world than at any time in the last half century. Policymakers must confront it in every area they deal with, from foreign affairs, defense, and trade to science, intellectual property, and taxation. And they must do so while navigating a maze filled with lobbyists, politicians, corporate leaders, financial titans, social activists, teachers, academicians, research scientists, and laboratory directors, all while contending with a twenty-four-hour news cycle and the drumbeat of social media. Finding a viable route through the Los Angeles freeway system at peak commuter time is easy compared with solving a problem that has the potential to disrupt the world order.

Of no lesser importance, but perhaps somewhat more tractable, is STEM education. As the PricewaterhouseCoopers (PwC) analysis suggests, automation could displace up to two in five American workers within 15 years.35 It doesn’t mean that the American workforce will have to shrink by 40%. There will be new jobs, but they will require different skills than the old ones, and most of those skills will fall under the STEM umbrella. It’s also worth noting that the job losses will affect white collar as well as blue collar workers, as the impact of artificial intelligence (AI) becomes more pervasive.

Creative destruction, the term economists use to describe what is in store for us, is not a new phenomenon. For more than two centuries it has driven America’s economic growth. But there are two differences this time around. First, the new skills will require more STEM proficiency, something a vast number of workers lack. And second, the average worker is likely to confront the impact of creative destruction more than once in a lifetime. As Tom Friedman emphasizes, coping with the persistent impact of technology will require a commitment to lifelong learning.36

If the PwC forecast is even remotely correct, politicians who are pushing universities to focus their curriculum on training students for existing job opportunities will be on the wrong side of history. The jobs of today will almost certainly not be the jobs of tomorrow. Providing students with broad-based STEM skills is far better education policy than focusing narrowly on the proficiencies job recruiters might be seeking today.

Lifelong learning and improving STEM education are both achievable goals. But there is another problem that requires more study, and probably a more comprehensive policy solution. It’s the impact of child poverty on STEM proficiency.

Every 3 years, the Organization for Economic Cooperation and Development (OECD) conducts an international survey of educational proficiencies among 15-year-old students. In math and science, American students invariably perform abysmally—ranking 40th in math and 25th in science in the 2015 survey among the 72 participating nations.37 The poor overall performance might seem shocking, but it tells only one part of the story. The other part lends some clarity to the scores and indicates what needs to be done.

The Program for International Student Assessment (PISA), as the survey is called, also samples the economic, cultural, and social status (ESCS) of the students taking the exam. And the ESCS results are incredibly revealing when it comes to child poverty. Using the eligibility for a free or reduced-price school lunch as a measure of privation, it’s possible to sort the PISA scores accordingly. And the disparities are dramatic.

In science,38 for example, students in schools where 75% or more attendees were eligible for the lunch program scored 446 points on the PISA exam, well below the OECD average of 496. By contrast, students in schools where 10% or fewer were eligible scored 553, a disparity of more than one hundred points. The results for math39 show a similar ESCS gap.

The problem in the United States is more acute than in many other OECD countries, because American child poverty rates are far higher than they are in similarly advanced nations. In weak economic times, almost one in three American children grow up in families living below the poverty line. In better times, it’s one in five.40 By contrast, in Denmark it’s one in 34; in Finland, one in 27; and throughout most of Western Europe, on average, about one in 9 or 10.41

As STEM skills become more and more important, consigning 20% of the population to a grim future would not only be a drag on the American economy, it could pose a threat to American democracy, as income disparity continues to widen beyond what it is today. Finding solutions to the problem should be on the critical path of policymakers and elected officials. Our Kids,42,43 by the Harvard sociologist Robert Putnam, should be required reading for all of them.

As the second decade of the 21st century draws to a close, several other science and technology issues remain unresolved. The dramatic advances in information technology and artificial intelligence, which continue to create the disruptive societal accelerations Thomas Friedman wrote about in Thank You for Being Late,43 pose extraordinary challenges for policymakers well beyond the workforce dislocations PwC has predicted. They go to the very heart of science as a historical province of the elite.

In a populist era, being part of an exclusive social class is not a good place to be. But changing the culture of the scientific community is a tall order, especially if the community doesn’t see it as a necessity. And so far, scientists haven’t recognized it as a significant problem.

Technology has also created a major challenge to the way research findings are shared. Bits in the cloud, rather than ink on a page, in principle, make it possible for anyone to ferret out the latest scientific discoveries with just a few clicks of a computer mouse—but only if the results are freely available. If taxpayer money has been used to support scientific research, the resulting discoveries should be available at no additional cost to any member of the public who wants to see them, or so proponents of the movement known as Open Access argue. Scientific publishers, who regard themselves as guardians of the “truth,” respond that “peer review” of scientific manuscripts by experts is essential to keep poorly tested theories or bogus results from seeing the light of day.

Open science advocates parry the riposte by noting first that peer review is far from perfect, and second, that complete openness allows far more scrutiny by a wider range of experts. Balancing the pros and cons of open science—which also includes making vast amounts of scientific data freely available—requires policymakers to execute a high-wire act they have rarely encountered before.

Making scientific findings more comprehensively and more readily available also places a greater premium on scientific reproducibility, because greater openness carries with it greater scrutiny. With a populace that has become highly skeptical of institutions of all kinds—government, universities, industries, and the science and technology enterprise as a whole—just a few highly visible missteps can do irreparable damage to public trust in science. If that trust disappears, so, too, will support for using taxpayer money to pay for research.

Grand challenges exist, but so do grand opportunities. Two of them are related to breakthroughs in medicine. CRISPR-Cas9,44 or simply CRISPR, is a genome editing tool that made headlines in 2014. It is faster, less expensive, and more accurate than other existing editing methods, and it holds the promise of generating preventative cures for myriad diseases at reasonable costs.

But life in the science policy world is never simple, especially where health is concerned. Using CRISPR in the research laboratory is one thing. Using the technique on human subjects is quite another. Even if it is shown to be abundantly safe, medical ethicists and policymakers will have to grapple with the question of whether it is proper to use CRISPR to enhance desirable traits, such as intelligence or physical appearance, or restrict its use to modifying genes associated with illnesses such as cancer, heart disease, and mental disorders.

Precision medicine45 is less fraught. It relies on assembling information on gene variability, environmental influences, and the effect of personal lifestyles in order to tailor the treatment of each patient individually, rather than apply a one-size-fits-all protocol. A holistic approach to medicine is certainly not new, but combining it with “big data,” as precision medicine does, is new, and it promises to be a big deal.

One more observation: American society is far more diverse today than it was half a century ago. And the developing world is far more developed now than it was just two decades ago. How we deal with the changes will determine the future of America and the world. If we adopt policies that promote inclusion, we can make science a unifying proposition. By so doing, we can improve the human condition far beyond the benefits technology, alone, can provide.

This brings us to the end of our navigation through the maze that has characterized American science and technology policy for more than 225 years. I have tried to highlight the essentials, as I perceive them; but some critics undoubtedly will take issue with my choices. By using historical narratives, I have attempted to weave a fabric that is rich in texture, colorful in appearance, and enduring in utility.

I will summarize the tour with a few final thoughts. Science and technology policies have shaped the nation and the world as we know it today. They have had a remarkable run. In pursuit of successful outcomes, their practitioners have marshaled facts, data, analyses, and forecasts, which, taken together, constitute what George W. Bush’s science adviser, Jack Marburger, called the science of science policy. But they have achieved their greatest successes when they have applied political savvy, exploited personal relationships, and timed their efforts perfectly. And in some cases, they have benefitted from serendipity, or in the vernacular, dumb luck.

Today, the maze is far more complex than it was when our nation was founded. There are far more vehicles trying to navigate it, and there are far more drivers of those vehicles. Whether science and technology will continue to maximize societal benefits and minimize societal harms depends on how effectively the drivers navigate the maze. I hope this book will help them achieve that goal.
