Chapter 6

You’re Going to Need a Geek: Introduction to Analyzing Data

In This Chapter

arrow Using statistics to analyze data

arrow Recognizing how your data varies among customers

arrow Understanding connections between customer traits

arrow Designing effective tests

arrow Using data to predict customer behavior

The world of database marketing revolves around customer data. Lots of data. But data is just raw material. What you’re truly after is insight into your customers’ preferences, needs, attitudes, and behavior. In short, you’re after information. Information is the result of analyzing and organizing your raw data in the context of a particular business problem or opportunity. It’s information, not data, that drives decision making.

Customer data is messy. It’s often incomplete. Sometimes the accuracy of the data varies from customer to customer. Data for some customers may be old and outdated.

Analyzing data and teasing information from it can be a complicated and technical exercise. It requires some expertise in the field of statistics and data analysis to do it carefully and effectively. In other words, you will need the assistance of a geek. Being a full-fledged geek myself, I use this term endearingly.

In this chapter, I do not make any attempt to dive into the technical details of statistical geekdom. For that I refer you to Statistics For Dummies, 2nd Edition (Wiley, 2011). I only have space to attempt to introduce you to some key concepts that will help you to communicate effectively with your technical team. These concepts will help you be clear in how you ask for analytic assistance. They will also help you understand and critically evaluate the results of that analysis.

What Are Statistics, Anyway?

Turning raw data into meaningful and useful insights is what the field of statistics is all about. A statistic is essentially a measurement of something. More specifically, it’s a summary of several measurements. Some examples: A batting average is a statistic that purports to summarize how well a player hits. Intelligence quotients summarize the scores from a test. Political poll results summarize how a group of people answered certain questions. Stock market indices summarize the performance of a group of stocks.

The field of statistics is, to some extent, the black sheep of the mathematical sciences. A popular saying lists the degrees of dishonesty as “lies, damned lies, and statistics.” I once had a statistics student nickname the course “Sadistics.” She was responding to my introduction of the term mean, which is the technical name for a certain kind of average.

The fact is, in many cases, the conclusions reached by performing statistical analysis are just downright counterintuitive. Numerous studies show that even people who are well trained in statistics can be really bad at applying that training to real-world situations. In other words, when people try to interpret data intuitively, their guesses are often way off.

The counter-intuitive nature of statistics leads inevitably to its misuse. I’ve heard people half-jokingly (or maybe not) claim that, given any set of data, they can make it say whatever they want it to. This is known as fudging the data. In its simplest form, fudging involves ignoring or excluding data that doesn’t support the desired conclusion.

People have an innate tendency to fudge data when it comes to their past experiences. Horoscopes, for example, are popular for precisely this reason. People look for and remember situations when their horoscope was right on target.

The same sort of thing happens when people report their lottery winnings. People never forget the $1,000 prize they won two years ago and will tell the story over and over again. But they leave out the fact that they’ve spent $20 a week for 5 years on lottery tickets, which adds up to more than $5,000, over 5 times their winnings.

warning_bomb.eps Fudged data is the enemy of good database marketing. It prevents you from learning what is and isn’t working. When you decide to analyze data regarding your marketing campaigns, you need to analyze all of it. You can’t pick and choose the results you want to see.

Despite having a somewhat spotty reputation, the science of statistics really is a science. Proper use of statistical techniques can bring some order to what at first appears to be a chaotic mass of data. Careful analysis can provide you with useful insights into your customers.

An example of the power of statistics when it’s properly used can be found in the gaming industry. Casinos are associated with gambling. But the casinos themselves are doing no such thing. They understand the statistics — the odds — associated with every game they operate. And they set the payouts.

If you’ve ever been to Vegas, you’ve seen countless advertisements about slot machine payouts. They entice you with claims that they pay out something close to 99 percent of what they take in. True enough. But that means the casino keeps 1 percent. And literally millions of silver dollars are dropped into those machines.

The business of database marketing uses statistics in a similar way. The idea is that you want to stack the odds in your favor when you choose whom to communicate with. The majority of the people who receive your offer may not respond. But you use the power of statistical methods to make it highly likely that enough will respond to cover the cost of your marketing efforts and provide a healthy profit to boot.

remember.eps The results of data analysis are often open to interpretation. Often the results are inconclusive. But respect whatever the data is or isn’t telling you. Don’t respond to disappointing results by asking that the data be analyzed differently. Trying to find what you expected to see in the data will not help you learn. And it won’t improve the effectiveness of your marketing efforts. Unless you have specific concerns about the way an analysis was done, trust what your analyst is telling you.

The Average Customer Doesn’t Exist: Understanding Variation

You encounter averages on a daily basis. You can watch the Dow Jones average bounce around on the ticker to your heart’s content. Athletes are judged on their batting averages, average points per game, or first-serve percentage. Endless studies report that Americans eat an average of so many pounds of beef, potato chips, or broccoli per year. But what do these averages actually tell us?

By themselves, they don’t really tell us a lot. As a database marketer, you’re much more concerned with understanding how certain traits vary from customer to customer. Statisticians call these traits variables. (This is one of those rare occasions when a technical term actually reflects what it means, so enjoy the moment.) Customer age, household income, number of children, and date of last purchase are examples of variables you frequently encounter.

Variables can vary in several different ways. In this section, I discuss some scenarios you’re likely to see in your attempts to understand your customer data. In comparing these scenarios, you’ll come to appreciate how little an average, taken by itself, is really worth.

Growing or shrinking: Variation over time

Suppose you’ve saved up $1,000 and you’re considering an investment in the stock market. Do you really care whether the Dow Jones average is at 15,000 or 1,500? The simple answer is no. What you really care about is whether it’s going up or down. I’m oversimplifying your decision, but you see the point.

Trends are sometimes more important than the actual value of an average. A trend represents the general direction that something is moving. Trends discount small fluctuations along the way. For example, I-95 extends from south Florida all the way up through Maine. You can find places along the way where the highway travels east, west, north, and everything in between. But the general trend is northeast.

Part of your job as a database marketer is to spot trends in your customer data. Detecting a potentially negative trend early on gives you the opportunity to intervene. Recognizing a positive trend allows you to encourage it and “ride the wave.”

A few years ago I was doing some work for a bank. We began looking through several years’ worth of customer data. One observation that popped out was that the average age of the bank’s customers was increasing steadily. This became deeply concerning when we realized that the customer base was aging quite a bit faster than the nation as a whole.

There were a number of alarming factors here. Obviously, mortality being what it is, it meant a shrinking customer base. But this trend also explained why deposits and loans weren’t growing. As the customer base aged, more customers were on fixed incomes. And older customers also tend not to take out loans or carry balances on their credit cards. All bad news for the bank’s bottom line.

This discovery led the bank to begin actively pursuing younger customers. The bank targeted college students and young professionals with marketing programs. This is a case where the marketing database yielded insights that changed the entire corporate marketing strategy.

tip.eps By continually tracking key customer traits over time, you can respond to trends as they are happening. Develop tracking reports and run them on a regular basis. Even just taking a monthly or quarterly look at the state of your customer base can help you spot potential problems or opportunities in time to act.

The average car has 3 doors: Variation in groups

Imagine there are 20 vehicles in a parking lot. You decide to count the passenger doors on each vehicle. You find there are seven pickups with two doors, three two-door coupes, and ten four-door sedans. Adding all that up gives you 60 doors. Dividing by 20 vehicles gives a mean of 3 doors per vehicle. Clearly, this average is not reflective of the vehicles in the parking lot — in fact, not a single vehicle has three doors.

A more useful way of looking at that data is to graph it. Figure 6-1 is a graph showing how many cars have two, three, and four doors. This graph is much more informative and useful than the average. It clearly shows that the cars fall into two distinct groups.

9781118616017-fg0601.eps

Illustration by Wiley, Composition Services Graphics

Figure 6-1: The distribution of cars according to how many doors they have.

Figure 6-1 is an example of a histogram. A histogram is a type of graph that shows you the distribution of a variable across its various values. It gives you a much better sense of what’s really going on in your data than a mean does.
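
If you’re curious how your geek might check this, here’s a minimal sketch in Python using the made-up parking-lot numbers. The data is hypothetical, and your analyst would more likely use a spreadsheet or statistical package, but the idea is the same.

# Minimal sketch of the parking-lot example: the mean is 3 doors,
# but the frequency counts tell the real story. Data is hypothetical.
from collections import Counter

doors = [2] * 7 + [2] * 3 + [4] * 10   # 7 pickups, 3 coupes, 10 sedans

mean_doors = sum(doors) / len(doors)
print(f"Mean doors per vehicle: {mean_doors}")   # 3.0, yet no vehicle has 3 doors

# A simple frequency count is the table behind a histogram like Figure 6-1
for value, count in sorted(Counter(doors).items()):
    print(f"{value}-door vehicles: {count}")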

tip.eps In your explorations of your customer data, you’ll rarely find a variable whose values are grouped symmetrically around the mean. The famous bell curve is a fundamental part of statistical theory, but you’ll almost never run across one in your marketing database. Always look at how the data is actually distributed.

My car door example is completely made up. But it’s typical of a pattern that you’ll frequently see in your data. Your geek may refer to this pattern as a bi-modal distribution. Bi-modal means there are two bumps in the distribution where the data is congregated. Distributions can sometimes have more than two such bumps.

tip.eps Bi-modal distributions are often signals that you’re dealing with two distinct customer behaviors or motivations. You may find that customers buying a particular product are largely grouped in the 20-something and 60-something age groups. In this case, you clearly don’t want to be targeting the average — that is, 40-somethings. But you can develop separate marketing strategies for the two groups that reflect their differences.

Misleading averages: Wide variation

You will run across situations where your data distribution seems to go on forever. Income data is like this. The vast majority of households have incomes in a fairly narrow range. Certainly, the percentage of households making less than a million dollars accounts for almost everyone. But no matter how high you go, $10 million, $100 million, even $500 million, you will still not have accounted for every single household. This situation is commonly called a long-tailed distribution. These distributions make averages extremely misleading. The reason is that data way out yonder in the distribution contributes a lot more to the average than data at the bottom.

A simple calculation will illustrate my point. Suppose you have 100 people making $50K and 1 person making $10 million. This gives a total of $15 million in income. Dividing by 101 people comes out to an average income of just over $148,500. That’s nearly three times what all but one of those people actually make. And this misrepresentation is being caused by a single data point.

tip.eps A long-tailed distribution is one instance where ignoring data is a good idea. When performing analysis on these types of distributions, it’s all right to throw out the extreme data points, called outliers. If you don’t want to throw them out completely, then at least cap them at some reasonable level so they don’t muddy up the works.
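
As a rough sketch of both the problem and the capping idea, here’s the income example worked out in Python. The numbers and the $250K cap are made up purely for illustration.

# Sketch of how one outlier distorts an average, and how capping helps.
# All figures are hypothetical.
incomes = [50_000] * 100 + [10_000_000]

raw_mean = sum(incomes) / len(incomes)
print(f"Raw mean: ${raw_mean:,.0f}")         # about $148,515, which misleads

cap = 250_000                                # cap extreme values at a chosen level
capped = [min(x, cap) for x in incomes]
capped_mean = sum(capped) / len(capped)
print(f"Capped mean: ${capped_mean:,.0f}")   # about $51,980, much closer to typical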

You’ll find that very wide distributions arise frequently in looking at behavioral data. I recently looked at some annual pass data for an entertainment company. Some people used the pass only once. The vast majority used it less than ten times. But I found passes that were used well over 200 times.

In situations like that, it’s impossible to graph the whole distribution in a meaningful way. If you group the data into wide ranges, you don’t see the meaningful variation at the bottom. I illustrate this in Figure 6-2.

9781118616017-fg0602.eps

Illustration by Wiley, Composition Services Graphics

Figure 6-2: Usage data grouped by wide ranges.

The better alternative is to cap the distribution at some fairly early value and create a bar for “everything else” (see Figure 6-3).

9781118616017-fg0603.eps

Illustration by Wiley, Composition Services Graphics

Figure 6-3: The same usage data viewed in a more useful way.

Now you can see that there is actually a bi-modal distribution at the lower end. Lots of customers use their pass only once, and there’s another spike centered around five uses.

The bubble on the right isn’t really a bubble. If I actually continued to graph out the entire distribution it would go on for several pages and no page would have more than a handful of customers represented on it. But this not-really-a-bubble does give you a sense of how many customers are using their passes a lot.

This distribution suggests that, if you were this entertainment company, you’d have two different marketing opportunities. First, you’d want to get the single-use customers to come back. You’d need to figure out why these folks aren’t returning and try to overcome those barriers. Second, you’d want to maximize your revenues from the second group. You might do this by communicating special events or keeping them informed of what’s new. The high-use customers probably don’t need a lot of additional database marketing attention.

My point here is that, in this example, you’ve identified three distinct groups of customers. And you’ve done that by looking at only one variable. Now you can dig deeper into the data about each group separately and develop marketing campaigns to address each one.

remember.eps Understanding the way data varies among your customers or over time helps you to identify marketing opportunities. It allows you to group your customers together in meaningful ways.

I explain some more advanced approaches to grouping customers in Chapter 7. But first I want to look at some other aspects of basic data analysis.

Looking for Relationships in Your Data

Customer data is interrelated. It may seem at first glance that age and income represent two completely different aspects of a customer. But a relationship emerges when you look across your database as a whole. You’ll find that as customers age, their incomes tend to go up as well.

Connections between customer traits

This tendency for two traits to move together is known as correlation. Correlations may be strong or weak or nonexistent. People’s heights might be very strongly correlated with their mothers’. But they probably aren’t quite so strongly correlated with their great-grandmothers’. They probably have nothing at all to do with what day of the year they were born on.

These tendencies may also be positive or negative. People’s total debt tends to go down as they get older and pay off mortgages and other loans. This is an example of a negative correlation.

Understanding cause and effect

warning_bomb.eps The existence of a statistical tendency does not, by itself, imply that one thing in any way causes another. I’m quite sure that there is a correlation between the number of cigarette lighters that a person buys and their risk of lung cancer. But it’s the cigarettes they also buy, not the lighters, that explain this tendency. The connection between lighters and lung cancer is known as a spurious correlation.

An example from my banking days involves a marketing program that was designed to increase deposits in CD accounts. We started to analyze the results of that program after it had been in market for a while. Initially, we noted that the number of CDs sold since the program launched had jumped significantly. Great news! The campaign was working.

But when we tried to calculate the profit that had been generated by this wonderful campaign, we ran into a problem. Despite the fact that we were opening all these new accounts, the overall dollar volume hadn’t changed much.

After digging around a bit, we discovered that in order to support the CD campaign, the branch network had put an incentive program in place for tellers. This program, not surprisingly, rewarded them for opening CD accounts. But the rewards were based on the number of accounts they opened.

Armed with this little piece of information, we went back through the data and looked at the customers who were opening new CD accounts. It turns out that this wasn’t new business at all. The volume came from expiring CDs, which the tellers were simply rolling over into multiple new accounts. A $20,000 CD, for example, was being split into four $5,000 accounts.

Our initial excitement over the success of our marketing program turned out to be unjustified. We had mistaken the spurious correlation between our marketing campaign and the new accounts for cause and effect. The actual cause was the teller incentive program.

tip.eps You need to be careful about attributing cause and effect to correlations. This is especially true when you’re evaluating the success of your marketing campaigns. The best way to do this is to design your marketing campaigns in the same way that scientific experiments are designed. (I touch on this subject in the next section and I address measurement in detail in Chapters 14 and 15.)

Sometimes even spurious correlations can be useful

Spurious or not, you can take advantage of statistical tendencies to enhance the power of your marketing database. You will run into situations where you know or suspect that a particular customer trait is central to understanding customer behavior. The problem is that you don’t carry that trait in your database.

Here’s where correlations come in. You may very well have a variable in your database that is correlated with the trait you are interested in. Survey research often uncovers these kinds of correlations. There is also a great deal of demographic research in the public domain — census data, for example — that analyzes connections between variables. Chapter 19 talks about some resources that may be helpful in this regard.

tip.eps By replacing one variable with a different, correlated variable — called a proxy variable — you can essentially make use of information that you don’t actually have. The proxy variable certainly won’t be the same as actually having the information you want. But finding a proxy variable that’s highly correlated with the trait you’re interested in is the next best thing.

In the lighter versus cigarette example, it’s clear that attempting to reduce lung cancer rates by targeting the sale of lighters is misguided. The lighters aren’t the source of the problem. But if all you want is to identify people who are at risk of lung cancer, then lighter purchases would make a reasonable proxy.

Campaigns Are Experiments: Using the Scientific Approach

Measuring results is a fundamental part of database marketing. You are in a unique position to be quite precise in quantifying the effectiveness of your campaigns. You know exactly whom you contacted, when, and how. And you know who responded. This allows you to conduct your campaigns the same way a scientist would conduct an experiment.

Designing a measurable campaign: Control groups

When a drug company wants to test the effectiveness of a new drug, they don’t just give it to a bunch of people and see if they respond. They design an experiment where some people get the drug and some people get a neutral substance that has no medical effect, called a placebo. This placebo group is known as a control group.

The basic idea is to isolate and measure the effects of the drug and only the drug. It might happen that 5 percent of those taking the drug develop a rash. If 5 percent of the people taking the placebo also develop a rash, then it is not likely that the experimental drug was the cause.

Control groups play a central role in your measurement process. The idea is the same as in the drug experiment. Once you have identified your target audience for a particular campaign, you need to send some of them a “placebo.” Actually, you need to send some of them nothing at all. You just need to flag them in your database as members of the control group for this campaign.

When it comes time to analyze responses, you check to see how many customers from the control group responded without being contacted. This may sound silly. How would they respond if you didn’t even send them the offer? But remember that your company has other marketing initiatives out there designed to drive sales. I have personally never seen a control group that didn’t have at least a few responders.

You then compare the response rate of the control group with that of the group you actually mailed. This allows you to calculate how many of the responses can reasonably be attributed to your campaign.
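
Here’s a minimal sketch of that comparison, using made-up response numbers; the real calculation in your shop may be more involved.

# Sketch of attributing incremental responses to a campaign.
# All counts are hypothetical.
mailed_size, mailed_responders = 100_000, 2_500    # 2.5% response rate
control_size, control_responders = 20_000, 300     # 1.5% background response rate

control_rate = control_responders / control_size

# Responses you'd expect from the mailed group even without the campaign
expected_anyway = mailed_size * control_rate
incremental = mailed_responders - expected_anyway

print(f"Responses attributable to the campaign: {incremental:,.0f}")   # 1,000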

I talk about measuring marketing campaigns in detail in Chapters 14-17. In this section, I simply introduce a couple of basic ideas related to campaigns as experiments.

Taking a sample: Selecting customers at random

When I started a new job as a database marketing analyst years ago, one of my first assignments was to do a response analysis on a fairly large marketing campaign. Taking everything they told me at face value, I compared response rates between the mail and control groups. Much to my surprise, the control group outperformed the mail group. And by a large margin.

I scratched my head and rooted around in the data for a couple of days. One thing I discovered was that response rates varied significantly by geography. Our brand was more established in some places than others. So I started asking about the control group selection. Who did it? How was it done?

Turns out, the company had recently hired a new vendor to execute its mailings. The previous vendor had always pulled the control groups, so the company asked the new vendor to do so as well. But the new vendor didn’t really understand control groups. Asked to hold out 20,000 names as a control group, the vendor simply peeled the first 20,000 names off the top of the list.

Now, one thing that mail vendors do is prepare mail for bulk rates from the USPS. This involves, among other things, sorting the mail file by zip code. Our entire control group came from the top of a sorted list. Everyone in it lived in a small number of zip codes in a region that had an unusually high response rate. This made meaningful measurement of the campaign’s success impossible.

remember.eps Your control group needs to accurately reflect your target audience. If it doesn’t, then your experiment is flawed, and your measurements will be suspect or meaningless. The best way to ensure that your control group is representative of your target audience is to select its members randomly.

Selecting a group of customers at random is called random sampling. Creating random samples is a job for your technical team. Every list that’s pulled out of a database is sorted by some customer trait or other. That sorting can render your measurement plan completely ineffective.

It’s a good idea to have at least a general sense of how your technical team is selecting your control group. Database and analytic software, even spreadsheets for that matter, have the ability to generate random numbers. These numbers typically range from 0 to 1. To split the file in half, you simply generate a random number for each record. If the number is less than .5, you put the record in the target audience. Otherwise, you put it in the control group. In Chapters 14-16, I talk in much more detail about creating and using random samples.
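
Here’s a minimal sketch of that random split in Python, assuming a simple list of hypothetical customer IDs. Your technical team would typically do this inside the database itself, and the control group would usually be a smaller fraction than half.

# Sketch of a random 50/50 split using random numbers between 0 and 1.
import random

# Hypothetical customer IDs; in practice these come from your database
customers = [f"customer_{i}" for i in range(1, 1001)]

target, control = [], []
for customer in customers:
    if random.random() < 0.5:   # random number between 0 and 1
        target.append(customer)
    else:
        control.append(customer)

print(f"Target audience: {len(target)}, control group: {len(control)}")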

Looking for Significant Results

Every election year we are inundated with poll results. It seems like every day there is a new poll out. Each poll is followed by a debate about how to interpret the results. Part of this debate is spin doctoring. But part of it is rooted in statistics.

The results of each poll are accompanied by an estimate of the margin of error associated with that poll. Essentially, this margin of error measures how significant the results really are. Fifty-one percent of respondents might say they will vote for a particular candidate. But this doesn’t really mean much if the margin of error is 3 percent. Such a result is not statistically significant. What these results are actually saying is that support for that candidate is probably somewhere between 48 and 54 percent. Not conclusive.

Being confident in your measurements

The error margins that are reported along with political polls are due to the fact that the polls are based on random samples. There’s certainly room to question the way these polls define an eligible respondent. But the error margins are related only to the size of those samples. These samples are quite small compared with the overall population. But large or small, whenever you estimate based on a random sample, you introduce the possibility of errors.
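
For the curious, here’s a rough sketch of the standard formula behind those error margins, assuming a simple random sample. It shows why the margin depends on the size of the sample rather than the size of the overall population.

# Sketch of the approximate 95% margin of error for a polled proportion.
from math import sqrt

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate margin of error for an estimated proportion."""
    return z * sqrt(proportion * (1 - proportion) / sample_size)

for n in (400, 1_000, 10_000):
    print(f"Sample of {n:>6,}: +/- {margin_of_error(n):.1%}")
# Roughly 4.9%, 3.1%, and 1.0% respectively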

If you flip a fair coin ten times in a row, you don’t expect it to come up heads all ten times. This almost never happens. The key phrase here is almost never. If you flip a coin enough times, it’s eventually going to come up heads ten times in a row. That’s just the nature of random variation.

What does this have to do with marketing, you ask? When you run a campaign, you randomly hold out a control group. This allows you to measure how many campaign responses were directly due to your communication. You compare your response rate to the number of responses in the control group.

Because the control group was selected randomly, it is possible that by pure chance it isn’t really representative of the overall audience. Luckily, you can, or rather your geek can, calculate exactly how likely it is for this to happen.

That calculation results in a confidence level for your response results. This is a measure of how unlikely it is that your results are due purely to chance. In the coin flip example, it’s related to how rarely you would expect to get ten heads in a row. Results that have sufficiently high confidence levels are considered statistically significant.
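
If you want a peek at the kind of calculation your geek runs, here’s a rough sketch of one common approach, a two-proportion z-test, using made-up numbers. Your analyst may well use a different test, so treat this as illustrative only.

# Sketch of a two-proportion z-test comparing mail and control response rates.
# All counts are hypothetical.
from math import sqrt, erf

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mailed_size, mailed_responders = 100_000, 2_500    # 2.5%
control_size, control_responders = 20_000, 300     # 1.5%

p1 = mailed_responders / mailed_size
p2 = control_responders / control_size
pooled = (mailed_responders + control_responders) / (mailed_size + control_size)

se = sqrt(pooled * (1 - pooled) * (1 / mailed_size + 1 / control_size))
z = (p1 - p2) / se

# One-sided confidence that the mailed group really did respond better
confidence = normal_cdf(z)
print(f"z = {z:.2f}, confidence = {confidence:.1%}")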

remember.eps In the worlds of statistics and science, results need to have a confidence level in excess of 95 percent to be considered significant. This means that there is only a 5 percent chance, or 1 in 20, that the results are due purely to chance. But because you’re doing marketing, not medical research, it’s reasonable for you to treat 90 percent confidence as significant. Anything lower than 90 percent, though, should be treated as inconclusive.

remember.eps Paying attention to confidence levels keeps you focused on what actually is working. It also makes your financial calculations extremely credible. You can literally say with 95 percent confidence that your campaign made money for your company.

Sizing your control group

Getting significant results is not a crap shoot. You can stack the deck in your favor from the beginning. The size of your control group is, in a sense, the determining factor in whether you can report high confidence in your response rates.

remember.eps Essentially, larger control groups lead to higher confidence levels.

There is a trade-off here, though. Control groups represent lost opportunities: to the extent that your campaign is successful, not mailing to the control group costs you responses. And sometimes control groups need to be quite large.

Your geek can help you to determine the appropriate number of customers to hold out in the control group. You will need to provide two pieces of information:

check.png The response rate you expect.

check.png How much you think your campaign will increase responses.

Clearly, both these estimates are guesses on your part. Campaign history is a good place to get a sense of what to expect. Experience — yours or that of someone who has executed campaigns in your industry — is really your only guide to estimating campaign response rates before the fact. Over the years, I’ve seen campaigns with response rates that range from a fraction of a percent to above 50 percent.

If you’re reasonably close in your estimates, a control group can be sized that will greatly increase your chances of seeing significant results. I talk more about predicting in a later section.
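
Here’s a rough sketch of the kind of sizing calculation your geek might run, based on a standard two-proportion sample-size formula. The response-rate and lift estimates are made up, and real-world tools differ in the details, so treat the output as a ballpark figure.

# Rough sketch of sizing a control group with a two-proportion sample-size formula.
from math import ceil

baseline = 0.015        # response rate you expect without the campaign (hypothetical)
lift = 0.010            # increase you think the campaign will produce (hypothetical)
treated = baseline + lift

z_alpha = 1.645         # 90% confidence, one-sided
z_beta = 0.84           # 80% power

variance_term = baseline * (1 - baseline) + treated * (1 - treated)
n_per_group = ceil((z_alpha + z_beta) ** 2 * variance_term / lift ** 2)

print(f"Hold out roughly {n_per_group:,} customers as a control group")
# About 2,400 with these made-up inputs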

Developing measurement plans for your campaigns is a core part of your job as a database marketer. Good measurement leads to learning, which in turn leads to better results. I talk in detail about campaign measurement in Chapters 14 and 15.

Multitasking: Combining Customer Traits

Earlier in this chapter I talk about understanding the variation in your customer data. That section focuses on looking at one customer trait, or variable, at a time. But the heavy-duty power of data analysis really comes into play when you start looking at multiple traits at once. This is known as multivariate analysis.

Finding useful groupings of customers

Looking across multiple customer traits at once is not easy. For one thing, it gets complicated quickly. And the number of customers that share several traits in common gets small quickly.

You may have a lot of customers in their 20s, a lot who have kids, a lot who are married, and a lot with incomes between $40K and $50K. But if you search your database for customers who have all of these traits, you’ll be shocked at how few you find.

This is a universal problem in dealing with customer data, or almost any data for that matter. When you focus on grouping customers together based on the values of particular variables, you end up with a huge number of very small groups.

In marketing, you want to identify groups, or segments, of customers with an eye toward their common needs and preferences. Dividing your customers into groups in this way is known as segmentation. Because your segments are focused on customer needs, they don’t necessarily need to be completely uniform. The customers in a segment don’t need to be cookie cutter copies of each other.

Chapter 7 is all about developing useful customer segments. I discuss various types of data that can be used in these efforts. I also talk about some common schemes that are used frequently in marketing.

tip.eps Because customer segments are the result of some pretty advanced analytics, it often isn’t clear how the segments are defined. It may in fact be a rather complicated process to decide which segment a customer belongs in. Leave this to your technical folks. Concentrate instead on what the segments actually look like. In other words, focus on describing these customer groups. What do they have in common and how do the groups differ from one another?

One customer segment that’s common to almost all companies is the high-affinity customer. These are customers who are very loyal to your brand. This high-affinity segment is identified through analysis of past purchase data. But this segment is generally far from uniform with respect to age, lifestage, and other demographic data. The high-affinity audience for children’s toys includes both parents and grandparents, for example.

The crystal ball: Making predictions

Ultimately you want to know who is likely to respond to a given marketing campaign. Many statistical techniques can help you with this goal. Again, these techniques require some advanced knowledge of data analysis, which should be left to your geek. But a couple of things are worth noting.

A statistically derived prediction is known as a predictive model. In database marketing, such models are generally used to predict responses to a campaign and are therefore called response models. To develop such a model, you need to have response data from previous campaigns.

As with the customer segments I discuss in the previous section, it is frequently not obvious why or how the model is making its prediction. This mysteriousness is typical of predictive models.

At some point in your life you have probably received a letter from your credit card company telling you that your interest rate has gone up or you need to start paying an annual fee. Beyond the bad news, these letters can be annoying for a different reason. It’s that sentence that says, “This action may be due to one of the following....” It then goes on to list a bunch of things like late payments or high balances, many or all of which don’t apply to you.

What’s going on here is that the credit card company is required to tell you not only that they are taking “adverse action,” but why. The problem is that the real reason they are taking adverse action is due to a statistical model, such as a credit score. And it isn’t easy to sort out exactly why such a model’s score went up or down.

You can certainly understand which variables the model is using. You can usually understand which ones are most important. But once everything gets thrown together, it’s best to just let the model tell you what it thinks. I talk in more detail about response models in Chapter 15.
