The Cloud Conundrum

The world created and replicated enough data in 2011 to fill over 50 billion iPads—a cost equivalent to the GDP of the United States, Japan, China, Germany, France, the United Kingdom and Italy combined.1

When Jeff Bezos, CEO of Amazon.com, introduced the company's latest breakthrough, the Kindle Fire, in late 2011, the visionary declared it “the culmination of the many things we've [Amazon.com] been doing for 15 years.”2 With the Fire, Amazon planned to take on the 800-pound gorilla of the tablet category, the Apple iPad. Although the two devices may have appeared similar in form factor and functionality to the general public, the strategy behind each product couldn't have been more different. Apple had earned the coveted position of most valuable company in the world, with a $465 billion market capitalization that exceeded Exxon's $400 billion in early 2012,3 based on a strategy of delivering better machines (91% of its revenues came from device sales, compared with just 6% from iTunes).4 In contrast, Amazon survived the bursting of the web bubble to rise as the most successful online retailer based on a content approach—with almost half of its revenues derived from sales of media like books, movies, music, and television shows.5 With Apple emphasizing a download approach to its devices (thereby placing a premium on storage and memory performance), Amazon challenged this notion with a streaming strategy intended to disrupt the tablet category. As such, Amazon boldly entered the market with a retail price for its new Fire far below any established by Apple. The debate surrounding which was better—a cloud-based approach à la Amazon or a device-led strategy as espoused by Apple—was on.

The Fire was just the latest manifestation of an ongoing strategy years in the making. And, while Amazon's tablet relied on the cloud to serve insatiable consumers the latest streamed content on demand, the company had already established itself as a cloud provider to business customers with its Amazon Web Services portfolio. In an interview with Wired, Bezos recalled the genesis of the new venture for the company:

Approximately nine years ago we were wasting a lot of time internally because, to do their jobs, our applications engineers had to have daily detailed conversations with our networking infrastructure engineers. Instead of having this fine-grained coordination about every detail, we wanted the data-center guys to give the apps guys a set of dependable tools, a reliable infrastructure that they could build products on top of.

The problem was obvious. We didn't have that infrastructure. So we started building it for our own internal use. Then we realized, “Whoa, everybody who wants to build web-scale applications is going to need this.” We figured with a little bit of extra work we could make it available to everybody. We're going to make it anyway; let's sell it.6

And with the same pioneering spirit that became synonymous with Amazon's culture, the company allowed capital-constrained enterprises of all sizes to purchase IT services as easily as standard utilities. Rather than invest in costly servers for storage capacity or computing horsepower, businesses could simply choose to rent such capabilities from Amazon, and the provider soon generated more traffic through its cloud computing services than through its established global websites. In essence, those uncongested lanes on the virtual highway Amazon created to accelerate its own development cycles were monetized and consumed by companies needing an on-ramp—and a quick exit—from an otherwise considerable investment in computer hardware that often sat underutilized during off-peak periods. And, true to Amazon's heritage of offering premium products at nonpremium prices (as Bezos told Wired, the company had become “very accustomed to operating at low margins”7), enterprises were afforded access to Amazon's on-ramp of cloud capabilities at extremely competitive rates. A win–win partnership was created in the marketplace, and Amazon quickly catapulted to annual revenues exceeding $500 million from the new line of business.8 But it was a well-publicized outage in April of 2011 that stole the headlines and overshadowed the success generated by the company in such a short period of time.

On April 21, 2011, during a routine network upgrade, a mistake redirected existing traffic to a lower-capacity, rather than higher-capacity, redundant network, and the company soon found itself the victim of automated intelligence gone haywire. As any reputable cloud provider would, Amazon had established redundancy algorithms throughout its network, whereby storage nodes were replicated to back up data. However, when traffic was inadvertently directed from a primary network route to a lower-capacity backup route, some affected storage nodes were unable to find their replicas. This triggered a re-mirroring storm, in which a large volume of storage nodes attempted replication concurrently, exhausting available capacity in one of Amazon's main serving areas. The chain reaction led to a customer outage of key Amazon cloud capabilities lasting approximately four days. At a cost of $5,000 per minute of IT outage, as estimated by Emerson Network Power,9 the implications for the businesses most affected by the Amazon interruption were devastating. For one customer in particular, the consequences were potentially life-threatening, as evidenced by his desperate plea posted to Amazon's support forums, which soon earned notoriety as it propagated virally as an example of what not to do in the cloud:
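
For readers who want the mechanics made concrete, the following is a minimal, hypothetical sketch of how a re-mirroring storm cascades. None of the names or numbers reflect Amazon's actual architecture; they are illustrative assumptions showing how replication demand from many nodes at once can swamp a low-capacity path:

```python
# Hypothetical simulation of a re-mirroring storm (illustrative numbers only;
# this is not Amazon's actual architecture or capacity data).

NODES_SEEKING_REPLICAS = 1_000  # storage nodes that lost sight of their replicas
GBPS_PER_REPLICATION = 0.5      # assumed bandwidth each re-mirroring node demands
PRIMARY_CAPACITY_GBPS = 1_000   # assumed capacity of the correct, primary network
BACKUP_CAPACITY_GBPS = 50       # assumed capacity of the mistakenly used backup

def assess(path_name: str, capacity_gbps: float) -> None:
    """Report whether concurrent re-mirroring fits within a network path."""
    demand = NODES_SEEKING_REPLICAS * GBPS_PER_REPLICATION
    if demand <= capacity_gbps:
        print(f"{path_name}: {demand:.0f} Gbps of replication absorbed "
              f"within {capacity_gbps:.0f} Gbps of capacity")
    else:
        # Nodes that cannot replicate keep retrying, so the congestion feeds
        # on itself and crowds out ordinary customer traffic as well.
        print(f"{path_name}: storm! {demand:.0f} Gbps of demand vs. "
              f"{capacity_gbps:.0f} Gbps of capacity "
              f"({demand / capacity_gbps:.1f}x oversubscribed)")

assess("Primary route", PRIMARY_CAPACITY_GBPS)  # normal operation
assess("Backup route", BACKUP_CAPACITY_GBPS)    # the mistaken redirect
```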

Life of our patients is at stake—I am desperately asking you to contact...

Sorry, I could not get through in any other way.

We are a monitoring company and are monitoring hundreds of cardiac patients at home.

We were unable to see their ECG signals since 21st of April.

Could you please contact us? ...

Or please let me know how can I contact you more ditectly [sic].

Thank you

Although the post drew the criticism of dozens of IT professionals perplexed by this customer's ignorance, the following response by a fellow IT professional sums up the consensus quite nicely:

Oh this is not good. Man mission critical systems should never be ran [sic] in the cloud. Just because AWS is HIPPA [healthcare] certified doesn't mean it won't go down for 48+ hours in a row.

And in fewer than 50 words, the opinion of one IT professional speaking on behalf of many in his community punctuated the point that makes the cloud in the enterprise so tricky. Although consumers may become irritated if a cloud outage disrupts their ability to stream the latest content to their Kindle Fire, a business with a “mission-critical” need facing such an interruption risks loss of productivity, data, or—even worse—customers. IT downtime costs businesses more than 127 million person-hours per year—an average of 545 person-hours per company—according to research among IT and business executives by CA Technologies. In the study, 35 percent of respondents indicated that IT downtime harms customer loyalty, and 44 percent stated that it damages staff morale. What's more, 87 percent of respondents said that failure to recover data would be damaging to their business, with 23 percent labeling such an inability “disastrous.”10 When control over IT resources is abdicated to a third-party provider like Amazon, enterprises must consciously weigh the benefit of shifting the upfront capital investment burden to that provider against the risk of outages in infrastructure beyond the enterprise's jurisdiction. And although many large companies, including Netflix, successfully weathered the Amazon storm because of their deliberate cloud redundancy planning, several others found themselves dead in the water.

Although the Amazon outage was journalistic fodder for days after the event, perhaps the bigger story resided in the fact that Amazon never violated its service level agreement (SLA) with its customers. In industry parlance, an SLA is a written arrangement between a supplier and a customer that defines critical aspects of a service, such as uptime and other performance metrics. If the SLA is violated, some form of compensation is generally exchanged to remunerate the customer for its losses. In Amazon's case, the company offers its cloud platform in multiple regions, with multiple availability zones within each region. According to the company's website at the time, customers who launched server instances in multiple availability zones could “protect [their] applications from failure of a single location.”11 At the time of the outage, Amazon's SLA committed to 99.5 percent uptime for customers with deployments in more than one availability zone within a region. However, those who scrutinized the language of this SLA would have discovered that it applied only to connecting to and provisioning instances. In fact, the outage didn't have a negative impact on customers in this regard. Still, that left many customers exposed and unprotected when the interruption disabled other cloud activities not covered by the SLA.
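
The arithmetic behind uptime percentages shows just how much room an SLA can leave. Here is a minimal sketch converting commitments into downtime budgets; the 99.5 percent figure comes from the SLA described above, and the rest is plain arithmetic:

```python
# Convert an uptime commitment into an allowable annual downtime budget.
# The 99.5% figure is the Amazon SLA cited in the text; others shown for scale.

HOURS_PER_YEAR = 365 * 24  # 8,760

def downtime_budget_hours(uptime_percent: float) -> float:
    """Hours per year a provider can be down without breaching the SLA."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for sla in (99.5, 99.9, 99.99):
    print(f"{sla:5.2f}% uptime allows {downtime_budget_hours(sla):6.1f} "
          f"hours of downtime per year")

# 99.5% permits roughly 43.8 hours per year, so a four-day (~96-hour) outage
# would dwarf the budget -- if the failed services were covered. As noted
# above, the SLA applied only to connecting and provisioning instances,
# which kept working throughout.
```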

The cloud represents an interesting conundrum for enterprises. As the Amazon experience demonstrates, there is still much to be learned in this emerging space. Of course, “the cloud” itself is not a new concept. Its origins can be traced back to the 1960s, when John McCarthy suggested that “computation may someday be organized as a public utility.”12 Since then, the market has been flooded with seemingly unending variations of the concept delivered by multiple vendors, each with different capabilities and SLAs. The cloud has suffered a bit from its own success, with more providers hopping on the bandwagon and fragmenting the definition of “cloud” even further. A simple Google search on the term “cloud computing” returns more than 100 million hits. For definitional purposes, this chapter explores a very specific category within cloud computing: infrastructure as a service (IaaS). In nontechnical terms, IaaS allows users to purchase computing horsepower virtually. These virtual machines (VMs), as they are known in the industry, can be dynamically instantiated and extinguished based on the needs of the end user. In other words, rather than purchase hardware equipped with the extra storage or processing power needed for an immediate but temporary task, users may instead rent these capabilities from the cloud, using just about any terminal at the endpoint as a “dumb device” or “thin client.”
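
To ground the idea of instances that are “dynamically instantiated and extinguished,” here is a hedged sketch using boto3, Amazon's current Python SDK for AWS (which postdates the period described here). The AMI ID is a placeholder, and the instance type is simply an illustrative choice:

```python
# Sketch: renting a VM on demand from AWS EC2 and returning it when done.
# Requires boto3 and configured AWS credentials. The ImageId is a placeholder,
# not a real machine image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Instantiate" a virtual machine only for the duration of the task...
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="t3.micro",          # small, inexpensive instance class
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# ...then "extinguish" it once the temporary need has passed, so the
# pay-as-you-go meter stops running.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The terminal issuing these calls can be nearly anything with a network connection, which is precisely what makes the “thin client” model viable.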

Under the IaaS scenario, the user requires a network connection through which cloud resources are accessed. Here is where the fragmentation of cloud offerings by unique providers has created untapped opportunity in the market. Anyone who has a high-speed broadband connection at home can relate to the following example: When attempting to download bandwidth-hungry content from a site, the speed of one's connection becomes increasingly important. The slower the speed, the more frustrating the experience. Conversely, the upstream speed becomes more critical when attempting to interact with the network, such as when uploading a large file or engaging in other “high-twitch” activities. Ask a hard-core gamer how important a highly responsive network connection (low “latency,” in industry terms) is to her online passion, especially in fast-paced first-person shooter games, and she will likely offer an enthusiastic response. In business terms, latency translates into far more than simply failing to fire a shot before one's enemy does in an online game; latency can erode business value and revenues. In an Amazon study, the cloud provider found that an additional 500 milliseconds in completing a Google search results in 20 percent less traffic for the search engine. An additional 100-millisecond delay on the provider's own retail site converts to a 1 percent decrease in sales.13 In other words, the speed and latency of the network have a profound impact on one's experience (a concept well understood by consumers) and one's business value (a fact recognized by companies).
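
Taken at face value, the 1-percent-per-100-milliseconds finding cited above supports a simple back-of-the-envelope model. The baseline revenue below is an illustrative assumption, and the linear extrapolation is a simplification:

```python
# Back-of-the-envelope revenue cost of latency, using the ~1% sales decrease
# per additional 100 ms cited in the text. Baseline revenue is hypothetical.

BASELINE_ANNUAL_REVENUE = 1_000_000_000  # assume $1B in online sales
DROP_PER_100MS = 0.01                    # 1% per 100 ms, from the study cited

def lost_revenue(extra_latency_ms: float) -> float:
    """Estimated annual sales lost to added latency (simple linear model)."""
    return BASELINE_ANNUAL_REVENUE * DROP_PER_100MS * (extra_latency_ms / 100)

for ms in (100, 250, 500):
    print(f"+{ms:3d} ms of added latency ~ ${lost_revenue(ms):,.0f} "
          f"in lost sales per year")
```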

Now imagine a business user attempting to access greater compute or storage resources on demand. Historically, these IT-related infrastructure components were the domain of data center companies—providers with significant computing platforms capable of renting their unused IT capacity on demand to interested buyers. It isn't far-fetched to suggest that, when one needs greater storage capacity and/or processing power, the network is affected in the exchange. Yet many of these companies ignored the network element completely, forcing enterprise users to work out for themselves the implications of renting IT resources and the associated impact on network demand. Even more confounding, those important SLAs that establish the covenant between buyer and seller often partitioned the cloud into distinct stovepipes—with IT SLAs provided by data center companies and network SLAs offered by service providers. In essence, buyers were forced to stitch together a cloud value proposition from multiple parties and do so while keeping an end-to-end view of SLA requirements. Perhaps this challenge helps explain why, in a 2011 IDC study among CIOs, the top two factors inhibiting adoption of cloud services were (1) bandwidth and latency needs for specific applications (mentioned by 44% of respondents) and (2) service level guarantees (cited by 40% of respondents). Even more revealing, these concerns topped conventional cloud criticisms like security, data ownership, privacy, and compliance.

Although performance and SLAs topped the list of concerns in the IDC study, persistent security questions still hold sway. In a global study of 573 business and technology executives commissioned by IT services provider Avanade in 2011, 51 percent of respondents cited security concerns as the top reason deterring their company's move to the cloud. And there appeared to be more than simple paranoia at play: nearly one-fourth of respondents reported that their company had experienced a security breach with a cloud service. Putting their money where their mouth is, 20 percent of respondents admitted to turning off a cloud service in their organization and returning to on-premise services, with security concerns the biggest driver of this move.14


If performance and security weren't enough, there's always the dreaded outcome of cloud sprawl to dissuade otherwise eager prospects from jumping on the cloud bandwagon. The industry fragmentation that has done so much to complicate the definition of cloud has a more practical consequence in the enterprise. In the same Avanade study, 60 percent of respondents were worried about unmanaged cloud sprawl—the use of multiple cloud platforms from multiple vendors with little to no management of how these clouds interoperate, if at all. To this point, more than one in four respondents reported not having a centralized system to identify and track their IT cloud service providers.15 The problem of cloud sprawl is further exacerbated when one mixes private and public clouds in the soup. Public clouds are just that—offered by service providers (like Amazon) that lease “public” facilities to buyers with temporary needs or a desire to conserve up-front cash. In contrast, private clouds are built using an enterprise's own resources in its data center and leveraged across its key internal constituents, leaving the company in control but also shouldering the overhead costs. There are pros and cons to both approaches, leading many enterprises to pursue a mix of private and public clouds. Although private clouds offer greater control and reuse of infrastructure (enterprises using private clouds have been shown to increase utilization of their existing assets from about 40% to 75% or more, with detailed insight into exactly how their infrastructure is being used across the organization),16 they don't match the speed or agility of their public competitors. All of this complexity often results in multiple clouds being used for different purposes or organizations, leading to the unenviable cloud-sprawl position in which enterprises increasingly find themselves.

The cautious attitude of many IT professionals isn't stopping the average enterprise employee from adopting cloud alternatives with enthusiasm. In the 2012 Alcatel-Lucent study, 35 percent of workers admitted to using Dropbox (a cloud-based storage service) for work purposes. One-third of respondents confessed to using Amazon's web-based services, apparently undeterred by the well-publicized outage of 2011. And although two in five large enterprises in the study have policies that restrict web applications or personal technologies (largely because of concerns about data protection and the infiltration of viruses and malware into the corporate network), the research shows that even these rules are no match for an employee's tenacity in using such services.

The confluence of all these factors—the fragmentation of cloud offerings with unique SLAs, the balancing act of protecting corporate assets while securing IT services on tap, and the unencumbered will of increasingly tech-savvy employees—makes the cloud a huge source of consternation for IT professionals. In the 2012 Alcatel-Lucent study, the cloud ranked as the biggest concern among IT managers and professionals, with 25 percent of respondents naming it their top issue, above even the consumerization-of-IT trend and the strategic role of the IT function. Still, the promise of the cloud is hard to resist, even among enterprise leaders who arguably bear the most responsibility for protecting the firm's prized assets. As one respondent in a related Alcatel-Lucent study of more than 200 enterprise professionals in the United States put it:

Moderator: Anything else that's intriguing to you that you're not ready to bring into the firm?

Respondent: Cloud. I think Cloud is probably the latest thing on everyone's mind. You look at it, and in theory it's a great idea, but just like everything that first comes out it needs some time to be developed and to be beneficial. But sure, think of the storage you would save, and think of how you can just go in and get what you need from this one area. It's just a matter of how could you protect it?

It's this constant tug-of-war that leaves the cloud market open to even more opportunity than the exponential growth seen in the category to date. The reality is that enterprises are still proceeding very cautiously into the cloud; some are outright retreating because of poor experiences. Despite a market conservatively estimated by analyst firm IDC to exceed $20 billion worldwide by 2014 (and that's just for the IaaS flavor discussed in this chapter),17 an industry more than 50 years in the making still appears to be in its infancy. To back up this controversial claim, in 2011, Alcatel-Lucent commissioned research covering more than 1,000 IT decision makers in the United States to explore the issues and opportunities still existing in the cloud landscape. Whereas more than 80 percent of respondents reported using the cloud for at least one need (for the technophiles reading this, these cloud alternatives may also have included software-as-a-service [SaaS] or platform-as-a-service [PaaS] varieties), only one in four reported relying on the cloud for at least one “mission-critical” application. This finding is corroborated by another research study commissioned by the IT Governance Institute, which polled more than 800 executives around the world and found only 18 percent of those who currently outsource considering the cloud for mission-critical services (helping to explain how the aforementioned Amazon customer could be so excoriated by his IT peers for trusting his “mission-critical” application to a cloud). Yet, according to the 2011 Alcatel-Lucent study, if the performance, latency, and security considerations could be sufficiently addressed, 95 percent of respondents who could be convinced to migrate at least one mission-critical application to the cloud would also transfer an additional five to six applications. The result is a tsunami of new opportunities not yet tapped by cloud providers, anchored by mission-critical applications that stimulate the movement of other IT needs to a cloud environment. Based on Alcatel-Lucent analysis from the 2011 study, that translates into a fivefold potential increase in addressable market opportunity not yet served by today's cloud alternatives.

The attractive market up for grabs does little to alleviate the concerns faced by the enterprises cloud providers hope to serve. For these companies, the tradeoff is an unenviable one: does a company cede control of its IT infrastructure to gain capital and speed advantages, or retain control at the risk of competitive disadvantage? Unfortunately, the answer is not as black-and-white as the question. There are several implications that enterprises must consider when evaluating whether a move to the cloud is right for them:

  • Don't silo the cloud—Although many companies specialize in one aspect of the cloud (such as storage, computing power, or network resources), the reality is that an effective cloud solution requires IT and network resources to work in concert. Focusing on one domain at the expense of the other (IT vs. network connectivity) can yield damaging consequences for well-intentioned enterprises eager to dabble in the cloud. Of the enterprises that use the cloud in the 2011 Alcatel-Lucent study, 28 percent indicated that cloud performance (including response time and end-to-end performance) was the single attribute most in need of improvement. This was the most popular option cited by these cloud users, beating out strong contenders like security, ease of use, and price. Harmonizing IT and network components gives these enterprises what they crave and fills a void that largely remains in today's market—an end-to-end cloud solution that seamlessly adapts when users apply greater bandwidth, storage, and/or compute pressure.
  • Don't underestimate SLAs—SLAs are the written covenant between provider and customer, and they are often the latter's only recourse when something goes wrong. If there is one given with the cloud (or any technology, for that matter), it is that things will go wrong. In the 2011 Alcatel-Lucent study, nearly two in five respondents indicated that they have experienced cloud outages, ranging from frequent interruptions that are short in duration to infrequent disruptions with longer staying power. Nearly the same number reported that their current cloud latency spans from uncomfortable to intolerable response times. Despite this, more than one in five enterprises are forced to monitor their SLA performance themselves (a minimal sketch of such self-monitoring follows this list), and more than one-third have no remediation when said SLAs are not met. SLAs may be common in the cloud market today; however, as the Amazon case illustrates, one must examine these contracts carefully to understand their risks and benefits. Otherwise, it's buyer beware.
  • Don't ignore new business models—The cloud is certainly a technology, but it is more importantly a new business model. For the first time, small enterprises have a viable means to compete against their larger counterparts, who have previously been the sole beneficiaries of dedicated IT staff and resources and the resulting advantages. Although the cloud itself is a new business model for pay-as-you-go IT infrastructure and support, there are more business models yet untapped in this market. As the last chapter discussed, more employees are bringing their own devices to the enterprise with an expectation that these devices will function for work. In essence, these employees are subsidizing the costs of IT by procuring the device themselves and using it for work purposes. Alcatel-Lucent was curious to see if the same phenomenon might hold true for network-based services offered through the enterprise. Specifically, would cloud-based solutions, like a virtual desktop service that allows the user access to corporate resources through any device, be compelling enough that employees might be willing to help subsidize their costs? In the 2012 Alcatel-Lucent study, the answer was clear. One in four frontline workers was very likely to pay $5.00 per month to their company—taken as a paycheck deduction, for example—in exchange for such a service. Although $5.00 per month per employee may not seem huge at face value, the figure adds up quickly as the size of the enterprise grows: in a 10,000-person company with one in four employees opting in, the deductions alone would total $150,000 per year. Furthermore, this treasure chest is in addition to the money that business and IT decision makers are willing to pay, in the more traditional sense, from organizational budgets to cover the costs of the service. In other words, just as the cloud has morphed into several implementations (from public to private to hybrid clouds), the potential business models for funding different cloud-based services are equally diverse.
  • Don't overestimate policy—Companies are quick to respond to potential abuse by instituting policies. As previously discussed, policies are essential to establish the written code between employer and employee. But moving from definition to alteration of behavior requires policies that are accompanied by employee understanding and management enforcement. Ignoring these critical elements of effective policy execution exposes enterprises to risks thought to be contained simply because a “policy” exists. A report from Application Security found that 28 percent of enterprise data breaches are due to “management complacency or a lack of awareness.”18 If lack of knowledge or enforcement isn't the culprit, perhaps employee apathy is to blame. A Cisco study found that 56 percent of employees don't always comply with IT policies even though they are fully aware of them.19 Even if the communication or enforcement of policy is sufficient, sometimes the policy itself is misguided. Particularly when employees are fully informed of the potential consequences of a policy decision, management may find a different attitude among those it is most interested in influencing. The BYOD phenomenon is one such example. Based on the 2012 Alcatel-Lucent study, nearly three in five frontline workers prefer a policy that permits an employee to connect to the corporate network to access company resources (cloud) over one that allows the company access to personal devices that may attempt to store such assets (BYOD). This preference is more interesting when one considers that the majority prefer the former despite potential network latency or performance issues. In fact, only one in four would accept a policy that allows employees to download corporate information to personal devices at the expense of also granting the company authority to monitor the device's contents. Policies are a necessary evil to mitigate lawsuits and establish written rules between management and employees. That said, relying exclusively on policy to dictate acceptable technology behaviors may blindside the company or the employees it attempts to influence.
  • Don't miscalculate security—When important corporate assets and data are virtualized in the network, one must take precautions to ensure that such resources are protected, such as seeking multitenancy options that partition one customer's data from another's in a shared cloud environment. In addition, although security will remain a top concern for enterprises considering the cloud, there is an argument to be made that the cloud may offer security advantages in some cases, particularly when protecting the firm from its own employees. It's not necessarily the case that employees are malicious (although, according to the book Essentials of Business Ethics, 75% of employees have stolen something from work in the course of their careers),20 but all too often, employees are simply careless. The increasingly mobile workforce compounds the risk that corporate information or assets will be left exposed. A Symantec MessageLabs report indicates that mobile employees are 5.4 times more likely to access dangerous content than those in the office.21 Protecting corporate assets behind the perimeter of a cloud that is bordered either by multitenancy controls (in the case of a public cloud) or by its own firewall (in the case of a private cloud) also mitigates the risk of such assets being exposed by innocent employees who pollute the corporate waters by infecting or losing connected devices. Security is a double-edged sword, and the risks and benefits of both cloud- and client-based alternatives should be considered.
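
As promised in the SLA bullet above, here is a minimal sketch of the kind of do-it-yourself SLA monitoring that more than one in five enterprises report performing. The endpoint URL, probe count, and interval are illustrative assumptions; a production monitor would probe continuously from multiple locations and persist its results:

```python
# Minimal availability probe for independently tracking a provider's uptime.
# URL, probe count, and interval are illustrative assumptions.
import time
import urllib.request

ENDPOINT = "https://cloud.example.com/health"  # hypothetical health endpoint
PROBES = 10                                    # small count for demonstration
INTERVAL_SECONDS = 6                           # real monitors probe around the clock

successes = 0
for _ in range(PROBES):
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=5) as response:
            if response.status == 200:
                successes += 1
    except Exception:
        pass  # timeouts and errors count as downtime
    time.sleep(INTERVAL_SECONDS)

measured_uptime = 100 * successes / PROBES
print(f"Measured uptime over the window: {measured_uptime:.1f}%")
# Compare the measured figure against the contracted SLA (e.g., 99.5%) --
# and, as the Amazon case shows, against what the SLA actually covers.
```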

Although Amazon took it on the chin during the media hype surrounding its cloud outage, the company responded with a detailed post-mortem that established a new level of transparency for cloud and other service providers to emulate. The misstep itself was anathema to the company's vigilant obsession with error avoidance, as Bezos remarked to Wired: “We really obsess over small defects. That's what drives up costs. Because the most expensive thing you can do is make a mistake.”22 Yet mistakes will be made, and cloud providers are no exception to the rule. Lew Moorman, chief strategy officer of Rackspace, another cloud service provider, compared the Amazon outage to an airplane crash—a major event with widespread damage. As Moorman astutely points out, however, airline travel is still safer than driving a car—the metaphorical equivalent of comparing the cloud to privately owned infrastructure.23 No matter how much journalistic fodder such crashes may generate, the coverage isn't doing much to temper the enthusiasm of eager passengers willing to board the plane and take to the clouds, as demonstrated by the Amazon customer who trusted the cloud for his mission-critical healthcare service. Although ridiculed by his community for what many considered an ignorant move, perhaps the sentiment of an IT executive, a respondent in the 2012 Alcatel-Lucent study who also happens to work in the healthcare field, sheds some perspective on why the cloud will remain significant for possibly another 50 years to come:

The healthcare industry is amazing [sic] evolving and changing on a daily basis. IT has to be able to keep up with this evolution. Our customers rely on us not just for a product but sometimes for a life or death solution. There is much more at stake than just revenue and we have to be successful at all things. The introduction of cloud computing will become the norm eventually in our business. No longer will we need to continue to purchase large storage servers, nor will we have to worry about running out of space. Our doctors can log from anywhere and access patient files or lab reports. Hospitals have to increase their customer base just as any business does. If we don't have cutting edge technology or meet the customers' needs they will go somewhere else. And not just down the street, they will go across the country or around the world to get the most effective healthcare that they need. We have to manage these customers' needs efficiently and financially effectively. This can't be done without highly effective technology.

Whether embraced by all functions of the enterprise or not, the cloud offers one such technology option to level the competitive playing field. What's more, it represents a new business model that will transform the way enterprises run on IT, thereby changing the complexion of the IT function and releasing new opportunities for an insatiably connected workforce. The terrain is not without its risks, although educated enterprises and the trustworthy providers that serve them have only scratched the surface of potential rewards.

MAKE HISTORY

In April of 2012, mobile history was made. That month, an application only 50 days in the making managed to break 50 million downloads and set a new record as the fastest-growing original mobile game of all time (even displacing “Angry Birds” as the top paid application in Apple's App Store). “Draw Something,” the addictive Pictionary-esque mobile game in which users engage on a social gaming platform and guess each other's drawings while separated by space and time, grew from three drawings per second when the game launched to 3,000 drawings per second at the time of its record.24 Even the most adept soothsayers would have been hard pressed to predict such growth (and even if they had, few would have given credence to the prognostication). A company confined to internal IT infrastructure would have been even more challenged to rise to the capacity demands created by such explosive growth. In fact, if the company behind Draw Something had been confined to its own infrastructure to support its users, the capital pressures would have been exceeded only by the time crunch of building such capacity. Thankfully, for Draw Something, such an outcome never had to be entertained. The company could scale up and down with cloud-based solutions as quickly as demand ebbed and flowed (or, in this case, continued to flow) to meet the insatiable appetites of its Picassos in the making. Luckily for the millions of rabid fans now addicted to the application, there's no shortage of drawings to be encountered anytime soon.
