
Chapter 2

Bottom Line Pay-offs: Eleven Financial Cases

Chapter 2 describes eleven cases with specific, quantifiable results illustrating how the practice of monetizing data management directly helped a number of organizations in tangible ways that were easy to relate to those in the C-suites. After all, these C-level executives are smart and, when you bring them a plan with good return on investment and results, the next question is usually: where do we go next?

How many times do we have to spend that money? ($10 million annually)

As part of a state government, all agency employees were required to complete forms that fed the statewide time-and-leave tracking system. For a variety of reasons, the state system often produced information that was both late and inaccurate. Agency employees could not rely on the state’s system to meet requirements imposed on them. In typical fashion, the agency developed its own information technology-based, agency-wide time-and-leave tracking system. And surprise, surprise: the agency-level system also produced information that was often both late and inaccurate. But participation in both systems was mandated by various laws and regulations, so each workgroup developed a workgroup-based system that satisfied requirements for impromptu weather-based scheduling, something neither of the other two systems could accommodate. Now, you might think we are done here but, wait: individual workers maintained a fourth system to assist them with reconciliation of the workgroup solution to the agency- and state-level systems!

It was well understood that every agency employee spent at least 15 minutes each week attempting to maintain accurate individual time-and-leave records. In an attempt to justify development of a statewide solution, our team was assigned the task of documenting every individual who spent more than 15 minutes a week tracking time and leave. So the statement accompanying the chart declared: “At least 300 individuals spend at least 15 minutes per week managing time/leave data.” As data was collected, pay-grade data was incorporated (see Figure 10 to appreciate the comprehensiveness of this reference table), permitting computation of at least part of the pay devoted to time-and-leave data management by each individual. (We could not directly identify each employee’s specific pay but, knowing a specific grade and using the lowest pay on that grade, we were able to calculate a minimum cost that no one disputed.) The result was a calculation of the minimum amount each district was spending on time- and leave-based data management.

Figure 10 Determining the minimum cost of employee time

In the sample shown (Figure 11), District-L had 73 employees tracking leave and another 50 tracking time. Knowing each employee’s pay grade and the number of time and leave transactions processed twice monthly, we produced costs per transaction processed. Summing the monthly district totals yielded a total cost of almost $10 million annually! These numbers helped management understand the various costs of things normally considered to be overhead expenditures and permitted them to begin the process of calculating the costs of complying with various regulations. More importantly, the leveraging power of data management was dramatically illustrated by the realization that a single system could eliminate 30% of the time required to adequately manage time-and-leave data and impact all 10,000 agency employees. The decision was a no-brainer, given that a $300,000 investment would clearly save the agency $10,000,000 annually.

Figure 11 Minimal cost of activities performed monthly
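For readers who want to see the mechanics, the district-level roll-up can be sketched in a few lines of code. The District-L head counts and the 15-minute weekly floor come from the case above; the pay-grade floor rates shown are hypothetical placeholders standing in for the lowest pay on each actual grade.

```python
# A minimal sketch of the district-level cost roll-up, assuming hypothetical
# pay-grade floor rates; only the District-L head counts and the 15-minute
# weekly floor come from the case itself.

GRADE_FLOOR_HOURLY = {"G5": 18.00, "G7": 24.00, "G9": 31.00}  # hypothetical floor rates

def min_annual_cost(employees: int, minutes_per_week: float,
                    floor_hourly_rate: float, weeks: int = 52) -> float:
    """Minimum annual cost of a tracking activity for a group of employees."""
    hours_per_year = employees * (minutes_per_week / 60) * weeks
    return hours_per_year * floor_hourly_rate

# District-L: 73 employees tracking leave and 50 tracking time,
# each spending at least 15 minutes per week.
district_l = (min_annual_cost(73, 15, GRADE_FLOOR_HOURLY["G5"])
              + min_annual_cost(50, 15, GRADE_FLOOR_HOURLY["G5"]))
print(f"District-L minimum annual cost: ${district_l:,.0f}")
```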

Who’s doing what, and why? ($25 million annually)

An international chemical company with more than $1 billion annual sales focused on developing and manufacturing additives to enhance the performance of oils and fuels. To scientifically understand the enhancements to engine performance, such as cleaner burning fuel, more smoothly running engines and longer lasting machines, they were running hundreds of tests annually—with each test costing up to $250,000! Tests were planned and directed by a large staff of chemical researchers and scientists who relied on informal systems to retain facts regarding, for example, what tests were run using what engine and under what conditions. The researchers evolved their own data management processes and, not having any formal education in data management, they did so in a less than optimal manner.

To illustrate the organizational cost of these sub-optimal data management practices, consider that Dr. X (with 10+ years of experience) is paid approximately $100 thousand annually for her services as an employee. As our engineers worked with Dr. X and her colleagues, they were able to document the tasks and the related amount of time spent organizing data in preparation for analysis versus the amount of time actually spent analyzing data. These practices resulted in members of the group spending 80% of their time performing less-valuable data management tasks and only 20% of their time performing the chemical research they were hired to do.

We compiled a chart showing six types of data management challenges encountered by a large number of their most valued knowledge workers, the PhDs in Chemical Engineering:

  1. Manual transfer of digital data – numerous instances of highly educated individuals manually transferring digital data from one computing platform to another by rekeying the data (and introducing errors)
  2. Manual file movement/duplication – moving individual instances of electronic files (many of them spreadsheets) via disk or USB drive from workstation to workstation
  3. Manual data manipulation – various cut/paste/transformation steps applied without documentation or procedure when files were received from colleagues
  4. Disparate synonym reconciliation – adjustments to column labels and column values based not on objective criteria but on subjective comparison by the receivers of the files
  5. Tribal knowledge requirements – uncountable erroneous tests caused by a practically infinite number of undocumented requirements, such as “George always forgets to subtract the weight of the container” or “Susie doesn’t properly convert between metric and English measurements”
  6. Non-sustainable technology – the vendor of the only database technology used by the department had gone out of business more than a decade earlier

Referred to as shadow IT, such data preparation efforts are, in reality, information technology projects run outside of centralized and authorized organizational information technology. Hence, shadow IT costs are generally unaccounted for, decision-making with regard to shadow IT projects tends to be subjective, and these efforts fail to benefit from mature, centrally implemented information technology practices.

Given a $100 thousand annual salary, Dr. X’s employer was aghast to discover that 80% of her work hours were spent accomplishing tasks that should have been performed by an individual whose cost was 40% of Dr. X’s salary. And this same individual could handle the data management needs for a group of these PhDs, effectively freeing up Dr. X and her colleagues from data management tasks and resulting in tangible savings and greater-than-expected increases in individual, workgroup and organizational productivity.
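The arithmetic behind that conclusion is easy to reproduce. In the sketch below, the $100,000 salary, the 80/20 split and the specialist at 40% of a PhD’s cost come from the case; the group size of eight PhDs is a hypothetical assumption used only for illustration.

```python
# Illustrative arithmetic for the Dr. X example. The salary, the 80/20 split
# and the 40%-cost specialist come from the case; the group size is hypothetical.

PHD_SALARY = 100_000
DATA_TASK_SHARE = 0.80                 # share of PhD time spent on data management
SPECIALIST_COST = 0.40 * PHD_SALARY    # one data specialist at 40% of a PhD salary
GROUP_SIZE = 8                         # hypothetical number of PhDs supported

cost_of_phd_data_work = GROUP_SIZE * PHD_SALARY * DATA_TASK_SHARE
net_annual_savings = cost_of_phd_data_work - SPECIALIST_COST

print(f"PhD salary spent on data management: ${cost_of_phd_data_work:,.0f}")
print(f"Net annual savings after hiring one specialist: ${net_annual_savings:,.0f}")
```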

As an additional bonus, knowledge workers produce more new ideas faster when they are able to maintain a closer, more focused concentration on their core tasks by reversing the organizing-to-analyzing ratio described here. And this list of benefits does not take into account the reduced risk of introducing errors and acting on erroneous information at the individual, workgroup and organizational levels. Integrating existing systems so that similar or identical tests could easily be searched for and found specifically reduced expenses, improved the client’s competitive edge and customer service, increased time savings and improved operational capabilities.

According to our client’s internal business case development, the company expects to realize a $25 million gain in the group’s productivity each year, thanks to improved data management processes.

Three ERP cases that also apply to software application package implementation

Enterprise resource planning (ERP) applications have achieved mixed results. Generally, they are perceived as difficult to implement properly. However, if an organization obtains the desired efficiencies within five years, the project is considered successful. What follows are three different ERP stories that could also apply to virtually any packaged application (also known as commercial off-the-shelf, or COTS, software).

How much will the data conversion cost? (2 Years and $3,000,000)

The first example helped management understand the unreasonableness of a proposed project plan submitted by consultants on a government project. Figure 12 is a project artifact. The consultants underbid the cost of the data conversion (a common practice), allotting a laughably small amount of time in the proposed project work breakdown structure to a task labeled “data conversion.”

Relevant project details illustrated one very specific problem space. At least 683 individual data items on the Payroll side and 1,478 on the Personnel side had to be examined and potentially mapped into the target ERP (see Figure 13). The consultant’s proposal addressed this challenge with a two-person-month (40 person-day) resource. Doing the math shows just how unrealistic the proposal was.

Figure 12 Project estimate

Figure 13 Counting data items to be converted

To map 2,161 attributes to 15,000 other attributes would require analysis rates of, on the:

  • source side: 2,000/40 person-days = 50 attributes/person-day, and
  • target side: 15,000/40 person-days = 375 attributes/person-day.

Adding the two factors yielded a requirement for these individuals to understand and handle 425 attributes every day for 40 days. The true impact of this requirement became obvious with the understanding that the data conversion team would have to handle and translate 53 attributes every 60 minutes of an eight-hour day. A final sigh escaped when the scope of the challenge was made explicit: locate, identify, understand, map, transform, document and assure the quality of each attribute at a rate of roughly 0.9 attributes per minute!
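The back-of-the-envelope arithmetic can be reproduced directly, using the same rounding of the 2,161 source attributes down to 2,000 that appears above:

```python
# Reproducing the data conversion workload arithmetic from the proposal review.
source_attrs = 2_000      # 683 payroll + 1,478 personnel items, rounded down
target_attrs = 15_000
person_days = 40          # the two person-months bid for the conversion

per_day = source_attrs / person_days + target_attrs / person_days  # 50 + 375 = 425
per_hour = per_day / 8                                             # ~53 attributes/hour
per_minute = per_hour / 60                                         # ~0.9 attributes/minute

print(f"{per_day:.0f} attributes/day, {per_hour:.0f}/hour, {per_minute:.2f}/minute")
```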

This is what we like to refer to as x-treme data management and, of course, the chances of this occurring on time, within budget and with full functionality are exactly zero. What is really amazing is that everyone seems resigned to paying for the overruns. Of course, if you incentivize bad behavior, you shouldn’t be surprised when it occurs.

We contacted the previous three clients serviced by the proposed consulting team and found that, on average, the data conversion portion of the last three projects ran over by two years and resulted in more than $3,000,000 in project overruns for the data conversions alone. To conclude this short tale, management pointed out this and other inconsistencies to the consultancy. We offered them a choice: submit a firm, fixed price for the data conversion task or resubmit the bid with more realistic numbers. This produced a vastly revised proposal that resulted in a lower overall total cost structure for the government.

How about measuring before deciding to customize? (If $1 million is substantial)

A second ERP example illustrates how good metadata management practices lead to a better decision-making process with respect to evaluating customization options. Much of this material is an extension of research originally published with our colleague, Lynda Hodgson (Billings, Hodgson et al. 1999).

Initial comparisons of legacy functionality to new and improved functionality (most often inappropriately made after product selection) indicated that operational processes that had been concisely addressed using a single screen in the legacy system now required operators to mentally integrate information across 23 screens! Faced with the reality of the complexity introduced, one scared executive asked the system integrators to advise as to how big a change was necessary to reverse the ERP’s designation of a single data element, PERSON-ID, and revert to a more familiar (but still improper) SSN as the system’s primary ID. Of course, the answer came back: Not a big change at all.

Figure 14 Reverse engineering PeopleSoft’s ERP

To quantify that reassuring statement, we accessed our ERP metadata, having reverse-engineered it previously; see Figure 14 and Aiken, Ngwenyama et al. (1999). This permitted us to understand which ERP components would require alteration in order to re-implement SSN as a centrally accessed data structure throughout the system. Understanding which processes were connected by panels to specific fields permitted us to identify all of the required modifications (see Figure 15). These included:

  • 1,400 ERP panels (display screens);
  • 1,500 data tables; and
  • 984 business process component steps.

We applied a measure of $200 per hour to a total of 971 hours of labor (assuming 15 minutes required for each change) and came up with a total of at least $200 thousand worth of changes.

Figure 15 PeopleSoft meta-model

Interestingly, this organization enforced a requirement that the cost of any modification to the new software must be factored into the implementation at a 5X cost. That is, if the cost of modifying the new software was $100, the organization would add $100 × 5 = $500 to the implementation cost. This was done to provide a disincentive to modify the software.14

We wish more organizations would adopt this somewhat unusual but very reasonable practice, as it forces organizations to consider that initial required modifications must also be re-implemented with each subsequent upgrade (in this case, five). This rightly raises the cost of customization to a more realistic number and forces organizations to more carefully consider the full cost of proposed modifications.

So, the initial cost (proposed at $200,000) was actually $1 million. This exercise added specific dollar costs to the comment, Not a big change at all! When we also pointed out the unreasonableness of assuming 15 minutes per change, the executive withdrew the request.
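The full calculation, combining the component counts with the 15-minute and $200-per-hour assumptions and the 5X upgrade multiplier, can be reproduced as follows; every figure comes from the case as described above.

```python
# Reconstructing the customization estimate described above.
panels, tables, steps = 1_400, 1_500, 984
changes = panels + tables + steps        # 3,884 individual modifications

hours = changes * (15 / 60)              # 971 hours, at 15 minutes per change
initial_cost = hours * 200               # ~$194,200 -- "at least $200 thousand"
lifetime_cost = initial_cost * 5         # the 5X rule: roughly $1 million

print(f"{changes} changes -> {hours:.0f} hours -> ${initial_cost:,.0f} initially, "
      f"${lifetime_cost:,.0f} across five upgrades")
```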

Is it really that complicated? ($5,500,000 and a person-century of labor savings)

Our third ERP/packaged software example illustrates how sound data management practices were used to support a hybrid manual-automated solution decision. Based on determining the point of diminishing returns, this final example is, to our knowledge, the first documented data management savings of a person-century (as opposed to calling it “100 person-years”).

A large government agency was preparing to implement its ERP. This organization managed over two million stock-keeping units (SKUs) in its catalog. The challenge presented was the existence of master data captured as clear text (in what are commonly called comment fields) in the legacy database. The master data arrived there when a previous group of consultants succeeded in getting Oracle, a relational database management system, to act as a hierarchical database management system – so there would be no need to change related software application processing! It was widely taken for granted that, because the organization’s master data existed in clear text fields, it would have to be extracted manually. As an alternative, we introduced an improvable text mining process that converted the non-tabular data into tabular data and verified its correctness against an evolving but verified set of master SKUs.

One of the big questions about improvable solutions is: when does one achieve good enough? Or, to put it more clearly, how does one determine the point of diminishing returns? The answer is: when the cost of additional effort exceeds the value that effort returns. In our solution, a fixed weekly cost was one-half of the weekly salaries of the two data engineers conducting the text mining. (One-half salary was used because the engineers were also performing a number of related tasks focused on improving the overall quality of the dataset, and the actual mining process was an hours-long batch run.) By holding the development costs constant (1/2 the weekly salaries), it was possible to discern the relative pay-offs of specific data management driven decisions. The 18-week process is summarized in Figure 16.
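The stopping rule can be expressed as a simple weekly test: continue refining the text mining only while the marginal value of the SKUs resolved that week exceeds the fixed weekly cost. The sketch below uses hypothetical salary and per-SKU values purely to illustrate the decision logic; neither figure comes from the case.

```python
# Hypothetical illustration of the point-of-diminishing-returns test.
WEEKLY_COST = 0.5 * (2 * 2_000)   # half the weekly salaries of two data engineers (hypothetical pay)
VALUE_PER_RESOLVED_SKU = 5.00     # hypothetical value of removing one SKU from manual review

def keep_refining(skus_resolved_this_week: int) -> bool:
    """True while another week of text mining refinement still pays for itself."""
    return skus_resolved_this_week * VALUE_PER_RESOLVED_SKU > WEEKLY_COST

print(keep_refining(26_800))  # True  -- early weeks clearly pay off
print(keep_refining(300))     # False -- the point of diminishing returns has been reached
```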

Figure 16 Calculating the point of diminishing returns

The initial goal had been to achieve a 50% reduction in the projected workload facing the team. After the first week, exactly zero items were matched (D3),15 but the first week’s results included identification of more than 26,800 records (or 1.34% of the total) that could be ignored (C3) from this point onward. The unmatched column (B3) indicated the percentage of the SKUs with no corresponding entry in the master file (31.47%). Over the next four weeks, our scoring of the unmatched items remained largely unchanged (B3 to B6), but the records we could identify as definitively ignorable increased to 11.99% (C6) and—important from a morale perspective—our matching jumped to over 55% (D6).

By the end of week four (row 6), the problem space had been reduced by two-thirds [55% (D6) + 11.99% (C6) = 66.99% of the original challenge now addressed], and the question changed from Is this a good approach? to How long do we continue with this approach?

Optimization was still an important consideration. To address this question, the chart has been shortened, limiting the presentation of the measurement results to the last four weeks of results (rows 8-12) to show how the team, consisting of both clients and consultants, easily arrived at the decision point. The revised project goal was now to reduce the manual effort to as little as possible within the original project budget guidelines. Weekly progress was reviewed by the teams, and accomplishments were noted:

  • A decrease in unmatched items from 32% to 7.46% (B6 to B12); meaning that, in real SKU terms, of the original 2 million individual SKUs, only about 150,000 now required manual intervention. The text mining process, along with the automated workflow, addressed 92.5% of the original problem space.
  • An increase in the number of ignorable items from 11.99% (C6) to 22.62% (C12). More than 450,000 SKUs were immediately ignorable, reducing the problem space by more than 20%.
  • An increase in the items matched from 55% (D6) to just under 70% (D12). This measurement indicated the successful creation of the resulting, high-value, golden master copy of the item master file.

This outcome permitted calculation of specific values such as per SKU and per person-year. The project team agreed that only marginal additional value could be achieved after week 18 and, thus, terminated refinement of the text mining at that point. These results were compared against the proposed manual extraction process. For the purposes of comparison, each person-year of effort was valued at $60,000, and the agreed-to total project value is shown in Figure 17.

Figure 17 Calculating a person-century of savings

It was agreed that data management had contributed at least a cumulative value of $5.5 million when using a time measure of five minutes (circled on the figure) to review and cleanse each individual data item. Of course, five minutes per item was a ludicrously optimistic figure, and that precise point was made during presentation of the results. Key, of course, was the observation that, if the time required was doubled to 10 minutes per SKU, the manual effort would require 186 person-years; at 15 minutes per SKU, it would require 279 person-years to accomplish the task—almost three person-centuries. More importantly, in our experience, we have encountered very few data quality challenges that can be resolved in only 15 minutes.
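The person-year figures can be reconstructed from the numbers in the case. The two million SKUs, the per-item review times and the $60,000 value per person-year come from the text; the roughly 1,800 working hours per person-year is our assumption, which is why the results land within a year or two of the 186 and 279 figures cited above.

```python
# Reconstructing the person-century comparison.
SKUS = 2_000_000
HOURS_PER_PERSON_YEAR = 1_800     # assumed working hours per person-year
VALUE_PER_PERSON_YEAR = 60_000    # agreed value of a person-year of effort

def person_years(minutes_per_sku: float) -> float:
    return SKUS * minutes_per_sku / 60 / HOURS_PER_PERSON_YEAR

for minutes in (5, 10, 15):
    py = person_years(minutes)
    print(f"{minutes} min/SKU -> {py:.0f} person-years (~${py * VALUE_PER_PERSON_YEAR:,.0f})")
# 5 min/SKU -> ~93 person-years (~$5.6 million); 10 -> ~185; 15 -> ~278
```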

Real solution cost ($30,000,000 versus a roomful of MBAs)

On another adventure, we encountered a data warehouse for a healthcare provider. Our entrance into the organization was via the chief financial officer on behalf of the chief executive officer. The CEO wanted an independent evaluation of the project as it had completely overrun its initial budget. Further, the provider had sunk more than $30 million into its development. The dataset was daunting!

  • It attempted to completely describe each of the organization’s 1.8 million members.
  • It contained entries for 1.4 million providers. At that ratio, each provider would be delivering excellent service to his or her 1.3 patients. (Yes, 1.3.)
  • 800,000 of the providers had no key associated with their records, rendering them practically useless as they were immune to retrieval.
  • Only 2.2% of provider records had all of the required nine digits (in a field named PROV_NUMBER), which led to the question, why is 97% of the data in the warehouse inaccessible?

The real coup de grace, however, was that the entire warehouse had exactly one operational user. That’s right, just one! And of course the under-resourced area’s tardiness in producing required reports triggered the request for the independent evaluation.

“I could have assigned a roomful of MBAs and accomplished this analysis faster!” was the comment heard from the CEO, who, as it happened, was unsympathetic to information technology support for the project.

While it is important not to generalize from small samples, Gartner claims that “more than half of data warehouse/business intelligence projects will have limited acceptance or will be failures through 2013.”16 Of course, in this instance, it was easy to total the costs of these efforts using activity-based costing and other techniques and, with just one user, come up with a total cost of ownership—proving the CEO’s previous point.

Two tank cases

The next two cases illustrate challenges around two vastly different types of tanks.

Why are we spending money on stuff we can’t even use? ($5 billion)

A friend and colleague, Peter Benson, describes an airplane from a logistics information perspective: “An F-15 is just 171,000 parts flying in very close formation.”17 This statement inspired an investigation by one branch of the Armed Forces. We would soon learn a very interesting fact: each tank purchased was really a collection of more than three million data values (Figure 18). And guess how many of those values reflected that the tank was actually obsolete? Well, the challenge was that no one really knew the answer.

Losing track of the significance of tracking materiel obsolescence, as well as the importance of managing the data that described each tank, wound up costing the Service Branch dearly, as it spent unnecessary funds maintaining expired inventory during a time of war.

Figure 18 Each tank comprised of 3 million data items

Carrying obsolete tanks generated unnecessary costs and negatively impacted warfighting efforts. Procuring parts for maintenance of obsolete materiel tied up monetary and human resources that could have been put to better and more relevant use. These unquantified costs impacted the following operational areas:

  • Mission Readiness – resources focused on the non-value-added task of maintaining obsolete inventory, creating constraints on the agency’s main mission
  • Storage – the costs of physical structures and real estate needed to house items
  • Handling – transportation and human resources that were dedicated to moving, maintaining, counting and securing outdated inventory
  • Opportunity – inventory that could have been returned to the manufacturer or sold to free up financial assets for more necessary and critical supplies
  • Systemic – cost of maintaining inventory information and paper or electronic records which could have been used to support mission-critical acquisitions and distribution
  • Equipment Lifecycle Management – cost of maintaining and repairing obsolete items

The errors were discovered using various data analysis technologies that permitted tracking of individual tanks through their respective lifecycles. Application of data profiling resulted in some profound conclusions. We discovered that obsolete equipment literally worth $5 billion was being unnecessarily maintained as a result of poor data management practices. These funds have since been freed up and applied to more direct warfighting missions.

The vocabulary of tanks ($4 million alternative to software package customization)

Tanks were also important to a petroleum products company that made the decision to use an out-of-the-box ERP. Rather than pay a projected $4 million to modify the ERP, it chose to increase the precision surrounding the various uses of one term, tank, within the organization. After purchasing, but before installing, the new ERP software, the company discovered that the accounting module counted each transfer of a component from one tank to another tank as a retail transaction! This, of course, did not meet organizational needs, and the company was faced with a choice among four options:

  1. Modify existing business practices.
  2. Request modifications of the new software – projected cost $4 million.
  3. Do some of each of options 1 and 2.
  4. Ignore the problem.

Most organizations do not make a formal choice from the above list. In this instance, following analysis, the organization committed to governing the data items so that the software could be used without modification. Careful governance of the use of the single term tank, and all its various sub-types, was instituted throughout the organization. Confusing the terms (a) would have led to non-sale internal transfers (that is, from one tank to another) being counted as additional retail sales and (b) could have forced the organization to restate its earnings. Now years past the initial decision, the organization calculates that it has saved not just the initial $4 million but millions more annually in software customization and retrofitting costs that would have been required with each subsequent ERP upgrade.

The additional 45% is worth $50 million18

When you pack for a week’s vacation in Florida, you follow some simple steps. When you pack for a backpacking trip to Europe, you follow the same simple steps; however, you measure success differently. One success measure for the backpacking trip: everything must fit into backpacks because we must be able to carry them for long periods of time. This success measure is not even considered for a Florida trip.

When our organizations transform to a data-centric approach, we begin to measure success differently than we did before—same project, same process, but with different measures that include:

  • asking if our data is correct;
  • valuing data more than valuing “on time and within budget;”
  • valuing correct data more than correct process; and
  • auditing data rather than project documents.

As we do so, we see a shift as the entire organization begins to speak in terms of data.

While we were working for a Top 10 financial institution, the technology team was given a project to calculate a new field:

NEW field (E) = A + B/C

where A is sourced from 1 of 6 systems, B is another customer record sourced from 1 of 6 systems, and C is data provided by a vendor.
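A minimal sketch of the calculation, assuming the conventional reading of the formula (A plus B divided by C) and a simple guard for missing inputs, looks like this; the function name and the None-handling are illustrative, not the institution’s actual implementation.

```python
# Hypothetical sketch of the new-field calculation with missing-data handling.
from typing import Optional

def new_field_e(a: Optional[float], b: Optional[float], c: Optional[float]) -> Optional[float]:
    """Return A + B / C, or None when any required input is missing or C is zero."""
    if a is None or b is None or c is None or c == 0:
        return None  # loans like these drove the initial 43% computed rate
    return a + b / c

print(new_field_e(100.0, 50.0, 2.0))   # 125.0 -> counts toward the computed rate
print(new_field_e(100.0, None, 2.0))   # None  -> a candidate for the alternate-field rules below
```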

We planned this project like all others, using our waterfall methodology and standard software development lifecycle. As was typical, the team addressed an approved set of business requirements. During development, we identified problem loan records with different fields available for the calculation. Some loans were missing field A, others were missing field B, while still others were missing the vendor-provided value for C. Working within the requirements, we resolved the anomalies identified during development. We reached out to other systems to populate missing fields, and we created new matching routines to grab data from separate loan records. After months of effort and bringing resolution to everything we could, we were finally ready to go to user acceptance testing.

The team was asked one question before approval of the code to migrate to user acceptance testing: For how many loans were we able to calculate the new field?

The response: 43%.

Naturally, the question followed: Why only 43%? The response:

We still have bad and missing data. We resolved everything we could. We do not have the required fields populated for all loans; therefore, the calculation does not return a value. We know it sounds low, but we double-checked and all requirements have been satisfied.

As a result, the team was challenged to conduct a deeper dive and provide detailed metrics—not something developers typically do. They met the challenge and produced metrics designed around the missing data. With the deeper dive, we had exact numbers to share with the client. They agreed: 43% did not meet their expectation! In order to generate more accounts that calculated the NEW Field Calc(E), we collaborated with the client and the vendor. After several iterations, we attained the following results:

  1. We received updated business rules from the client to accommodate the “bad data,” and we were able to use alternate fields when field A, B or C was blank.
  2. We discovered and corrected a code error that caused a zero value in B for a select group of loan records. Field B was populated on many loan records; only a specific group of loans missed the calculation.
  3. We identified and had the vendor correct several errors on column C.

Figure 19 displays metrics tracked during rework with the client and the vendor. By the time we deployed to user acceptance testing, we were able to increase the rate of “Computed a valid New Field Calc(E)” from 43% to 88% and explain every scenario of accounts unable to calculate the new field.

Figure 19 Tracking metrics

Had we completed analysis using our typical project approach, we would have delivered a new, calculated field on 43% of production loans. We would have met all the requirements, passed user acceptance testing and deployed to production. And the project would have been declared a success since we finished on time and under budget.

However, with a data-centric approach and using data to measure success, we were able to increase the number of loan records with the new field from 43% to 88% and enable the client to realize an additional $50 million in savings. This was the result of only one project that calculated only one new field. Imagine the impact of implementing this approach for all enterprise projects.

What happened to our funding? (at least $1 million in government funding)

Funding for USAID has traditionally been determined based on personnel head counts. The problem? At one point in its history, USAID did not know how many people actually worked there, primarily because it lacked a standard definition for employee. This led to conflicting statements, such as in 2009, when USAID:

  • reported to the White House that it had 10,000 employees;
  • reported to Congress that it had 11,000 employees; and
  • eventually determined, after undertaking a review of Microsoft licenses in an attempt to clarify these discrepancies, that more than 15,000 licenses existed.

If the cost of underreporting personnel head counts was only $1,000 per employee, the organization was missing more than $1 million annually. Missing out on funding due to a lack of quality data management also highlighted other important operational impacts: uncertainty regarding exactly who was employed and what qualifications those employees held. In crisis situations, USAID must be able to respond quickly and send the right people to the right place at the right time, but the resulting lack of adequate funding prevented development of systems (such as the prototype shown in Figure 20) that would better support USAID’s mission.

Figure 20 USAID prototype screen

But data stuff is complicated; how do I explain it? (£500 really increased project clarity)

One of the most difficult tasks with any data-related initiative is that of explaining (or, as it is more commonly known, selling) the project. In one of the most articulate examples we have witnessed, a group at BT (formerly British Telecom) created the animation below to educate everyone in the entire company about its master data management initiative, known as Seven Sisters. With permission, we’ve selectively illustrated a copy of this gem in Figure 21 (posted on the Web at http://www.datablueprint.com/aiken-book-monetizing-data-management).

Figure 21 A small investment in a visual explanation of the technical approach

Take a minute to watch it. For a very small investment, BT was able to hire talent that quite articulately transferred the entire dialog into a 44-second message; it was a bit of fun as well. The entire text of the Flash™ presentation follows: “Previously information was stored in hundreds of databases across BT. Processes were duplicated. Effort was wasted. It was all a bit messy. Now we are sorting all of our data—making it much easier to deal with.”

The animation closed with a pictorial description of data being sorted into seven piles so clearly laid out that non-information technology staff was able to identify a number of the desired master data management stacks. This small investment of a few British pounds proved the beginning of an invaluable communication channel and set the bar high with respect to communications coming from information technology.


14 Note the 5X cost figure is only valid if future modifications are unaffected by changes to the software package; modifications to the package made by the vendor would likely cause this figure to increase.

15 Cell identifiers appear within parentheses.

18 This section was authored by Linda Bevolo and edited by the co-authors who are indebted to her for such a substantive and timely contribution.
