21

Beyond Business

Much popular discussion about AI concerns issues of society rather than business. Many are not sure that AI will be a good thing. Tesla CEO Elon Musk has been one of the most consistent, high-profile, and experienced individuals sounding alarm bells: “I have exposure to the very cutting-edge AI, and I think people should be really concerned about it…. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”1

Another learned expert with an opinion on this is renowned psychologist and Nobel laureate Daniel Kahneman. Among non-academics, he may be best known for his 2011 book, Thinking, Fast and Slow. In 2017, at a conference we organized in Toronto on the economics of artificial intelligence, he explained why he thinks AIs will be wiser than humans:

A well-known novelist wrote me some time ago that he’s planning a novel. The novel is about a love triangle between two humans and a robot and what he wanted to know is how the robot would be different from the people.

I proposed three main differences. One is obvious: the robot will be much better at statistical reasoning and less enamored with stories and narratives than people are. The other is that the robot would have much higher emotional intelligence.

The third is that the robot would be wiser. Wisdom is breadth. Wisdom is not having too narrow a view. That is the essence of wisdom; it’s broad framing. A robot will be endowed with broad framing. I say that when it has learned enough, it will be wiser than we people because we do not have broad framing. We are narrow thinkers, we are noisy thinkers, and it is very easy to improve upon us. I do not think that there is very much that we can do that computers will not eventually [learn] to do.

Elon Musk and Daniel Kahneman are both confident about AI’s potential and simultaneously worried about the implications of unleashing it on the world.

Impatient about the pace at which government responds to technological advances, industry leaders have offered policy suggestions and, in some cases, have acted. Bill Gates advocated for a tax on robots that replace human labor. Sidestepping what would normally be government’s purview, the high-profile startup accelerator Y Combinator is running experiments on providing a basic income for everyone in society.2 Elon Musk organized a group of entrepreneurs and industry leaders to finance OpenAI with $1 billion to ensure that no single private-sector company could monopolize the field.

Such proposals and actions highlight the complexity of these social issues. As we climb to the pyramid’s top, the choices become strikingly more complex. When thinking about society as a whole, the economics of AI are not so simple anymore.

Is This the End of Jobs?

If Einstein has a modern incarnation, it is Stephen Hawking. Thanks to his remarkable contributions to science, made despite his personal struggle with ALS, and to popular books like A Brief History of Time, Hawking was seen, before his death in March 2018, as the world’s canonical genius. Thus, people unsurprisingly took notice when, in December 2016, he wrote: “The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”3

Several studies had already tallied up potential job destruction due to automation, and this time it wasn’t just physical labor but also cognitive functions previously believed immune to such forces.4 After all, horses fell behind in horsepower, not brainpower.

As economists, we’ve heard these claims before. But while the specter of technological unemployment has loomed since the Luddites destroyed textile frames two centuries ago, long-run unemployment rates have remained remarkably low. Business managers may be concerned about shedding jobs by adopting technologies like AI; however, we can take some comfort in the fact that farming jobs started to disappear over one hundred years ago, without corresponding long-term mass unemployment.

But is this time different? Hawking’s concern, shared by many, is that this time might be unusual because AI may squeeze out the last remaining advantages humans have over machines.5

How might an economist approach this question? Imagine that a new island entirely populated by robots—Robotlandia—suddenly emerged. Would we want to trade with that island of prediction machines? From a free-trade perspective, it sounds like a great opportunity. The robots do all manner of tasks, freeing up our people to do what they do best. In other words, we would no more refuse to deal with Robotlandia than we would require our coffee beans to be locally grown.

Of course, no real Robotlandia exists, but when we have technological change that gives software the ability to do new tasks more cheaply, economists see it as similar to opening up trade with such a fictitious island. In other words, if you favor free trade between countries, then you favor free trade with Robotlandia. You support developing AI, even if it replaces some jobs. Decades of research into the effects of trade show that other jobs will appear, and overall employment will not plummet.
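The logic here is comparative advantage, and a toy numerical example makes it concrete. All of the numbers below are hypothetical: suppose robots are absolutely better at both prediction and judgment, but their edge is largest in prediction. Specialization plus trade can still leave both sides with more of everything.

```python
# Hypothetical output per hour at two tasks. Robots are absolutely better
# at both, but their relative edge is in prediction (10x vs. 2x).
productivity = {
    "humans": {"prediction": 1.0, "judgment": 2.0},
    "robots": {"prediction": 10.0, "judgment": 4.0},
}
HOURS = 100  # hours available to each side

# No trade: each side splits its hours evenly between the two tasks.
autarky = {
    task: sum(productivity[side][task] * HOURS / 2 for side in productivity)
    for task in ("prediction", "judgment")
}

# Trade: humans specialize fully in judgment (their comparative advantage);
# robots tilt toward prediction (60 hours) while keeping 40 in judgment.
trade = {
    "prediction": productivity["robots"]["prediction"] * 60,
    "judgment": productivity["robots"]["judgment"] * 40
              + productivity["humans"]["judgment"] * HOURS,
}

print("no trade:", autarky)  # {'prediction': 550.0, 'judgment': 300.0}
print("trade:   ", trade)    # {'prediction': 600.0, 'judgment': 360.0}
```

Even though Robotlandia out-produces humans at both tasks, total output of both tasks rises once each side leans into its comparative advantage. This is the standard case for trade, and it is why economists treat cheaper machine prediction like a new trading partner rather than a pure threat.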

Our anatomy of a decision suggests where these new jobs are likely to come from. Humans and AIs are likely to work together; humans will provide complements to prediction, namely, data, judgment, or action. For example, as prediction becomes cheaper, the value of judgment rises. We therefore anticipate growth in the number of jobs that involve reward function engineering. Some of these jobs will be very skilled and highly compensated, filled by people who were applying that judgment before the prediction machines arrived.

Other judgment-related jobs will be more widespread, but perhaps less skilled than the jobs the AIs replace. Many of today’s highest-paid careers have prediction as a core skill, including those of doctors, financial analysts, and lawyers. Just as machine predictions of directions led to reduced incomes for relatively highly paid London taxi drivers but an increase in the number of lower-paid Uber drivers, we expect to see the same phenomenon in medicine and finance. As the prediction portion of tasks is automated, more people will fill these jobs, focusing more narrowly on judgment-related skills. When prediction is no longer a binding constraint, demand may increase for complementary skills that are more widespread, leading to more employment but at lower wages.

AI and people have one important difference: software scales, but people don’t. This means that once an AI is better than humans at a particular task, job losses will happen quickly. We can be confident that new jobs will arise within a few years and people will have something to do, but that will be little comfort for those looking for work and waiting for those new jobs to appear. An AI-induced recession is not out of the question, even if free trade with Robotlandia will not affect the number of jobs in the long term.

Will Inequality Get Worse?

Jobs are one thing. The income they generate is another. Opening up trade often creates competition, and competition causes prices to drop. If the competition is with human labor, then wages fall. In the case of opening trade with Robotlandia, robots compete with humans for some tasks, so wages for those tasks fall. If those tasks make up your work, then your income may go down. You are facing more competition.6

As with trade between countries, winners and losers from trade with machines will appear. Jobs will still exist, but some people will have less appealing jobs than they have now. In other words, if you understand the benefits of free trade, then you should appreciate the gains from prediction machines. The key policy question isn’t about whether AI will bring benefits but about how those benefits will be distributed.

Because AI tools can be used to replace “high” skills—namely, brainpower—many worry that even though jobs exist, they won’t come with high wages. For example, while serving as chair of President Obama’s Council of Economic Advisers, Jason Furman expressed his concern this way:

My worry is not that this time could be different when it comes to AI, but that this time could be the same as what we have experienced over the past several decades. The traditional argument that we do not need to worry about the robots taking our jobs still leaves us with the worry that the only reason we will still have our jobs is because we are willing to do them for lower wages.7

If the machines’ share of work continues to increase, then workers’ income will fall, while that accruing to the owners of the AI will rise.

In his best-selling book, Capital in the Twenty-First Century, Thomas Piketty highlighted that for the past few decades, labor’s share of national income (in the United States and elsewhere) has been falling in favor of the share earned by capital. This trend is concerning because it has led to increased inequality. The critical question here is whether AI will reinforce this trend or mitigate it. If AI is a new, efficient form of capital, then the capital share of the economy will likely continue to rise at the expense of labor.

No easy solutions exist for this problem. For example, Bill Gates’s suggestion to tax robots would reduce inequality but would also make buying robots less profitable. Companies would invest less in robots, productivity growth would slow, and we would be poorer overall. The policy trade-off is clear: we have policies that can reduce inequality, but likely at the cost of lower income overall.

A second trend leading to increased inequality is that technology is often skill-biased. It disproportionately increases the wages of highly educated people and might even decrease the wages of the less educated. Previous skill-biased technologies, including computers and the internet, are the dominant explanation for the increasing wage inequality in the United States and Europe over the past four decades. As economists Claudia Goldin and Lawrence Katz put it, “[i]ndividuals with more education and higher innate abilities will be more able to grasp new and complicated tools.”8 We have no reason to expect AI to be any different. Highly educated people tend to be better at learning new skills. If the skills needed to succeed with AI change more often, then the educated will benefit disproportionately.

We see many reasons that the productive use of AI will require additional skills. For example, the reward function engineer must understand both the objectives of the organization and the capabilities of the machines. Because machines scale efficiently, if this skill is scarce, then the best engineers will reap the benefits of their work across millions or billions of machines.

Precisely because AI-related skills are currently scarce, the learning process for both humans and businesses will be costly. Machine learning courses have proliferated, both at universities around the world and online. Andrew Ng’s online course at Stanford had taught 4.7 million students by April 2022. But that represents only a fraction of the global workforce. The majority of the workforce was trained decades ago, which translates to a need for retraining and reskilling. Our industrial education system is not designed for that. Businesses should not expect the system to change quickly enough to supply them with the workers they need to compete in the AI age. The policy challenges are not simple: increased education is costly. Such costs need to be paid, either by higher taxes or by businesses and individuals directly. Even if the costs could be easily covered, many middle-aged people might not be eager to return to school. The people most hurt by skill-biased technology might be the least prepared for lifelong education.

Will a Few Huge Companies Control Everything?

It is not just individuals who worry about AI. Many companies are terrified of falling behind their competitors in securing and using AI, a fear driven at least in part by AI’s possible scale economies. More customers mean more data, more data means better AI predictions, better predictions mean more customers, and the virtuous cycle continues. Under the right conditions, once a company’s AI leads in performance, its competitors may never catch up. In our Amazon predictive-shipping thought experiment in chapter 2, Amazon’s scale and first-mover advantage could conceivably generate such a lead in prediction accuracy that competitors would find it impossible to close the gap.
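A toy simulation (all numbers hypothetical) shows when such a feedback loop tips into winner-take-all. The key assumption is how strongly users respond to accuracy differences: each round, new users are allocated in proportion to accuracy raised to a “sensitivity” power, and each firm’s accuracy improves, with diminishing returns, in its accumulated data.

```python
import math

def simulate(sensitivity, rounds=20, new_users=100):
    """Leader's final share of data when users chase accuracy differences
    with the given sensitivity. All parameters are illustrative."""
    data = {"leader": 2.0, "rival": 1.0}  # leader starts with more data
    for _ in range(rounds):
        # Accuracy improves with data, but with diminishing returns.
        acc = {f: 1 - 1 / math.sqrt(1 + d) for f, d in data.items()}
        weight = {f: a ** sensitivity for f, a in acc.items()}
        total = sum(weight.values())
        for f in data:
            data[f] += new_users * weight[f] / total  # new users bring data
    return data["leader"] / sum(data.values())

print(f"users barely notice accuracy gaps: leader share = {simulate(1):.2f}")
print(f"users flock to the most accurate:  leader share = {simulate(40):.2f}")
```

With weakly sensitive users, the rival stays in the game; with highly sensitive users, a small initial data lead compounds until the leader takes essentially the whole market. Whether real AI markets behave more like the first case or the second is exactly the open question.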

This is not the first time that a new technology has raised the possibility of breeding large companies. AT&T controlled telecommunications in the United States for more than fifty years. Microsoft and Intel held near-monopolies in information technology in the 1990s and 2000s. More recently, Google has dominated search, and Facebook has ruled social media. These companies grew so large because their core technologies allowed them to realize lower costs and higher quality as they scaled. At the same time, competitors emerged, even in the face of these scale economies; just ask Microsoft (Apple and Google), Intel (AMD and ARM), and AT&T (almost everybody). Technology-based monopolies are temporary, thanks to a process that economist Joseph Schumpeter called “the gale of creative destruction.”

With AI, there is a benefit to being big because of scale economies. That does not mean, however, that just one firm will dominate, or that, even if one does, its dominance will last long. On a global scale, that is even truer.

If AI has scale economies, that will not affect all industries equally. If your firm is successful and established, chances are prediction accuracy is not the only thing that made it successful. The abilities or assets that make it valuable today will likely still be valuable when paired with AI. AI should enhance an airline’s ability to provide personalized customer service as well as to optimize flight times and prices. However, it’s not at all obvious that the airline with the best AI will have such an advantage that it will dominate all its competitors.

For technology companies whose entire business might rest on AI, scale economies might result in a few dominant companies. But when we say scale economies, how much scale are we talking about?

There is no simple answer to that question, and certainly we have no accurate forecast with respect to AI. But economists have studied scale economies of an important complement to AI: data. While many reasons might explain Google’s commanding 70 percent market share in search in the United States and 90 percent in the European Union, a leading explanation is that Google has more data for training its AI search tool than its rivals. Google has been collecting such data for many years. Furthermore, its commanding market share creates a virtuous cycle on data scale that others may never match. If there are data-scale advantages, Google surely has them.

Two economists—Lesley Chiou and Catherine Tucker—exploited differences in search engines’ data-retention practices to study this question.9 In response to the EU’s recommendations in 2008, Yahoo and Bing reduced the amount of data they kept; Google did not change its policies. These differences allowed Chiou and Tucker to measure the effect of data scale on search accuracy. Interestingly, they found that scale didn’t matter much. Relative to the overall volume of data that all the major competitors used, keeping less data did not hurt search results. Any effect that was present was too small to be of real consequence, certainly not the basis of a competitive advantage. This suggests that historical data may be less useful than many suppose, perhaps because the world changes too quickly.
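One reason extra data can stop mattering is statistical: for many prediction problems, estimation error shrinks roughly with the square root of the sample size, so each additional order of magnitude of data buys less. A short simulation (the click-through rate and sample sizes are made up for illustration) shows the diminishing returns:

```python
import random

random.seed(0)
TRUE_RATE = 0.3  # hypothetical "true" click-through rate to be estimated

errors = {}
for n in (100, 10_000, 1_000_000):
    # Estimate the rate from n simulated user interactions.
    clicks = sum(random.random() < TRUE_RATE for _ in range(n))
    errors[n] = abs(clicks / n - TRUE_RATE)
    print(f"n = {n:>9,}: estimation error = {errors[n]:.4f}")
```

Going from ten thousand observations to a million improves the estimate by far less than the first few thousand observations did, consistent with Chiou and Tucker’s finding that, past a point, extra data scale confers little advantage, at least for common queries.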

However, we offer an important caveat. As many as 20 percent of Google searches each day are said to be unique.10 Accordingly, Google may have an advantage on the “long tail” of rarely searched-for terms. Data-scale advantages are not dramatic for common queries, but in a highly competitive market like search, even a small advantage on infrequent searches may translate into a larger market share.

We still don’t know if the scale advantage of AI is big enough to give Google an advantage over other large players like Microsoft’s Bing or if Google is better for reasons that have nothing to do with data and scale. Given this kind of uncertainty, Apple, Google, Microsoft, Facebook, Baidu, Tencent, Alibaba, and Amazon are investing heavily and competing aggressively to acquire key AI assets. Not only are they competing with each other but with businesses that don’t yet exist. They worry that a startup will come along that “does AI better” and competes directly with their core products. Many startups are trying, backed by billions in venture capital.

Despite these potential competitors, the leading AI companies might get too big. They might buy out the startups before they become a threat, stifling new ideas and reducing productivity in the long run. They might set prices for AI that are too high, hurting consumers and other businesses. Unfortunately, there is no easy way to determine if the largest AI companies will get too big and no simple solution even if they do. If AI has scale advantages, reducing the negative effects of monopoly involves trade-offs. Breaking up monopolies reduces the scale, but scale makes AI better. Again, policy is not simple.11

Will Some Countries Have an Advantage?

On September 1, 2017, Russian president Vladimir Putin made this assertion on the significance of AI leadership: “Artificial intelligence is the future, not only for Russia, but for all humankind…. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”12 Are countries able to benefit from AI scale economies the way companies can? Countries can design their regulatory environment as well as direct government expenditure to accelerate the development of AI. These targeted policies might give countries, and the businesses located in them, an advantage in AI.

On the university and business sides, the United States leads the world in terms of both research on and commercial application of AI. On the government side, the White House published four reports in the final two quarters of the Obama administration.13 Relative to other areas of technological advance, that level of effort and coordination represents a significant government focus on AI. Under the Obama administration, almost every major government agency, from the Department of Commerce to the National Security Agency, was ramping up for the arrival of commercial-grade AI.

However, the trend lines are changing. In particular, the world’s most populous country, the People’s Republic of China, stands out for its recent success in AI, especially when set against its limited technological leadership over the past century. Not only are two of its AI-oriented tech firms—Tencent and Alibaba—among the top twelve in the world by valuation, but evidence suggests that its scientific push in AI may soon lead the world. For example, China’s share of papers at the biggest AI research conference grew from 10 percent in 2012 to 23 percent in 2017. Over the same period, the US share fell from 41 percent to 34 percent.14

Will the future of AI be “made in China,” as the New York Times proposed?15 Beyond scientific leadership, at least three additional reasons point to China becoming the world leader in AI.16

First, China is spending billions on AI, including big projects, startups, and basic research. One city—China’s eighth largest—has allocated more resources to AI than all of Canada. “In June, the government of Tianjin, an eastern city near Beijing, said it planned to set up a $5 billion fund to support the AI industry. It also set up an ‘intelligence industry zone’ that will sit on more than 20 square kilometers of land.”17

Research is not a zero-sum game. More innovation worldwide is good for everyone, whether the innovation occurs in China, the United States, Canada, Europe, Africa, or Japan. For decades, the US Congress worried that American leadership in innovation was under threat. In 1999, Michigan 13th District Representative Lynn Rivers (a Democrat) asked economist Scott Stern what the American government should do to address the increases in R&D spending by Japan, Germany, and others. His response: “The first thing we should do is send them a thank you letter. Innovative investment is not a win-lose situation. American consumers are going to benefit from more investment by other countries…. It is a race we can all win.”18 If the Chinese government is investing billions in and publishing papers about AI, then maybe a thank-you card is in order. It is making everyone better off.

In addition to investment in research, China has a second advantage: scale. Prediction machines need data, and China has more people to provide that data than anywhere else in the world. It has more factories to train robots, more smartphone users to train consumer products, and more patients to train medical applications.19 Kai-Fu Lee, a Chinese AI expert, founder of Microsoft’s Beijing research lab, and founding president of Google China, remarked, “The U.S. and Canada have the best AI researchers in the world, but China has hundreds of people who are good, and way more data…. AI is an area where you need to evolve the algorithm and the data together; a large amount of data makes a large amount of difference.”20 The data advantage only matters if Chinese companies have better access to that data than other companies, and evidence suggests they will.

Data access is China’s third source of advantage. The country’s choices with respect to privacy protection for its citizens may give the government and private-sector companies a significant advantage in the performance of their AIs, especially in the domain of personalization. For example, one of Microsoft’s most high-profile engineers, Qi Lu, left the United States for China, seeing it as the best place to develop AI. He commented, “It’s not all technology. It’s about the structure of the environment—the culture, the policy regime. This is why AI plus China, to me, is such an interesting opportunity. It’s just different cultures, different policy regimes, and a different environment.”21

This is certainly the case for pursuing features like facial recognition. China, in contrast to the United States, maintains a massive centralized database of photos for identification. This enables companies like the Chinese startup Face++ to develop and license facial recognition AI that authenticates drivers for passengers using Didi, the largest ride-hailing company in China, and that transfers money via Alipay, a mobile payment app used by more than 120 million people in China; the system relies entirely on facial analysis to authorize payment. Meanwhile, incumbent Baidu uses facial recognition AI to authenticate customers collecting their rail tickets and tourists accessing attractions.22 By contrast, in Europe, privacy regulation restricts data access far more than elsewhere, which may shut European firms out of AI leadership altogether.

These factors may create a race to the bottom as countries compete to relax privacy restrictions to improve their AI position. However, citizens and consumers value privacy; it is not a regulation that only companies care about. Acquiring user data involves a basic trade-off: intrusion risks customer dissatisfaction, while more personal data enables better-personalized predictions. The trade-off is further complicated by a free-riding effect: users want better products trained on personal data, but they prefer that the data be collected from other people, not from them.

Again, it isn’t clear which rules are best. Computer scientist Oren Etzioni argues that AI systems should not “retain or disclose confidential information without explicit approval from the source of that information.”23 With Amazon Echo listening to every conversation in your house, you want some control. This seems obvious. However, it isn’t so simple. Your banking information is confidential, but what about the music you listen to or the television shows you watch? At the extreme, whenever you ask Echo a question, it could respond with another question: “Do you approve giving Amazon access to your question in order to find an answer?” Reading all the privacy policies of all the companies that collect your data would take weeks.24 Each time the AI asks for approval to use your data, the product becomes worse. It interrupts the user experience. If people do not provide the data, then the AI can’t learn from feedback, limiting its ability to boost productivity and increase income.

There are likely to be opportunities to innovate in a way that assures people as to their data’s integrity and control while allowing the AI to learn. One emerging technology—the blockchain—offers a way of decentralizing databases and lowering the cost of verifying data. Such technologies could be paired with AI to overcome privacy (and indeed security) concerns, especially since they are already used for financial transactions, an area where these issues are paramount.25

Even if enough users provide data so AIs can learn, what if those users are different from everyone else? Suppose only rich people from California and New York provide data to the prediction machines. Then the AI will learn to serve those communities. If the purpose of limiting the collection of personal data is to protect the vulnerable, then it opens up a new vulnerability: users won’t benefit from the better products and greater wealth that AI enables.

The End of the World as We Know It?

Is AI an existential threat to humanity itself? Beyond simply whether one might get an uncooperative AI like HAL 9000 (in 2001: A Space Odyssey), what apparently keeps some very serious and smart people like Elon Musk and Bill Gates up at night is whether we will end up with something like Skynet from the Terminator movies. They fear that a “superintelligence”—to use the term coined by Oxford philosopher Nick Bostrom—will emerge that pretty quickly sees humanity as a threat, an irritant, or something to enslave.26 In other words, AI could be our last technological innovation.27

We are not in a position here to adjudicate this issue and cannot even agree among ourselves. But what has struck us is how close to economics the debate actually is: competition underpins it all.

A superintelligence is an AI that can outperform humans in most cognitive tasks and can reason through problems. Specifically, it can invent and improve itself. Science fiction author Vernor Vinge called the point at which such a machine emerges “the Singularity,” and futurist Ray Kurzweil suggested that humans cannot foresee what happens beyond that point because we are, by definition, not as intelligent. It turns out, however, that economists are actually quite well equipped to think about it.

For years, economists have faced criticism that the agents on which we base our theories are hyperrational and unrealistic models of human behavior. True enough, but when it comes to superintelligence, that means we have been on the right track. We already assume great intelligence in our analysis. We establish our understanding through mathematical proof, an intelligence-independent standard of truth.

This perspective is useful. Economics tells us that if a superintelligence wants to control the world, it will need resources. The universe has lots of resources, but even a superintelligence has to obey the laws of physics. Acquiring resources is costly.

Bostrom talks of a paper-clip-obsessed superintelligence that cares about nothing but making more paper clips. The paper-clip AI could just wipe out everything else through single-mindedness. This is a powerful idea, but it overlooks competition for resources. Something economists respect is that different people (and now AIs) have different preferences. Some might be open-minded about exploration, discovery, and peace, while others may be paper-clip makers. So long as interests compete, competition will flourish, meaning that the paper-clip AI will likely find it more profitable to trade for resources than fight for them and, as if guided by an invisible hand, will end up promoting benefits distinct from its original intention.28

Thus, economics provides a powerful way to understand how a society of superintelligent AIs will evolve. That said, our models do not determine what happens to humanity in this process.

What we have called AI in this book is not general artificial intelligence but decidedly narrower prediction machines. Developments such as AlphaGo Zero by Google’s DeepMind have raised the specter that superintelligence might not be so far away. AlphaGo Zero outperformed the world champion–beating AlphaGo at the board game Go without human training (it learned by playing games against itself), but it isn’t ready to be called a superintelligence. If the game board changed from nineteen by nineteen to twenty-nine by twenty-nine or even eighteen by eighteen, the AI would struggle, whereas a human would adjust. And don’t even think of asking AlphaGo Zero to make you a grilled cheese sandwich; it’s not that smart.

The same is true for all AI to date. Yes, research is underway to make prediction machines work in broader settings, but the breakthrough that will give rise to general artificial intelligence remains undiscovered. Some believe that AGI is so far out that we should not spend cycles worrying about it. In a policy document prepared by the Executive Office of the US President, the National Science and Technology Council (NSTC) Committee on Technology stated, “The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades. The NSTC Committee on Technology’s assessment is that long-term concerns about super-intelligent General AI should have little impact on current policy.”29 At the same time, several companies with the expressed mission of creating AGI or machines with human-like intelligence, including Vicarious, Google DeepMind, Kindred, Numenta, and others, have raised many millions of dollars from smart and informed investors. As with many AI-related issues, the future is highly uncertain.

Is this the end of the world as we know it? Not yet, but it is the end of this book. Companies are deploying AIs right now. In applying the simple economics that underpin lower-cost prediction and higher-value complements to prediction, your business can make ROI-optimizing choices and strategic decisions with regard to AI.

When we move beyond prediction machines to general artificial intelligence or even superintelligence, whenever that may be, then we will be at a different AI moment. That is something everyone agrees upon. When that event occurs, we can confidently forecast that the economics will no longer be so simple.

KEY POINTS

  • The rise of AI presents society with many choices. Each represents a trade-off. At this stage, while the technology is still in its infancy, there are three particularly salient trade-offs at the society level.
  • The first trade-off is productivity versus distribution. Many have suggested that AI will make us poorer or worse off. That’s not true. Economists agree that technological advance makes us better off and enhances productivity. AI will unambiguously enhance productivity. The problem isn’t wealth creation; it’s distribution. AI might exacerbate the income inequality problem for two reasons. First, by taking over certain tasks, AIs might increase competition among humans for the remaining tasks, lowering wages and further reducing the fraction of income earned by labor versus the fraction earned by the owners of capital. Second, prediction machines, like other computer-related technologies, may be skill-biased such that AI tools disproportionately enhance the productivity of highly skilled workers.
  • The second trade-off is innovation versus competition. Like most software-related technologies, AI has scale economies. Furthermore, AI tools are often characterized by some degree of increasing returns: better prediction accuracy leads to more users, more users generate more data, and more data leads to better prediction accuracy. Businesses have greater incentives to build prediction machines if they have more control, but that control, combined with scale economies, may lead to monopolization. Faster innovation may benefit society in the short term, but the resulting concentration may not be optimal from a longer-term social perspective.
  • The third trade-off is performance versus privacy. AIs perform better with more data. In particular, they are better able to personalize their predictions if they have access to more personal data. The provision of personal data will often come at the expense of reduced privacy. Some jurisdictions, like Europe, have chosen to create an environment that provides their citizens with more privacy. That may benefit their citizens and may even create conditions for a more dynamic market for private information where individuals can more easily decide whether they wish to trade, sell, or donate their private data. On the other hand, that may create frictions in settings where opting in is costly and disadvantages European firms and citizens in markets where AIs with better access to data are more competitive.
  • For all three trade-offs, jurisdictions will have to weigh both sides of the trade and design policies that are most aligned with their overall strategy and the preferences of their citizenry.