Iterating on models through A/B testing

In the examples above and in the previous chapters of this volume, we have primarily examined analytical systems in terms of their predictive ability. However, these measures do not necessarily quantify the kinds of outcomes that are meaningful to the business, such as revenue and user engagement. In some cases, this shortcoming is overcome by converting the performance statistics of a model into other units that are more readily understood for a business application. For example, in our preceding churn model, we might multiply the predicted probability of cancellation by the subscription price to estimate the dollar amount lost through subscriber cancellation.

In other scenarios, we are fundamentally unable to measure a business outcome using historical data. For example, in trying to optimize a search model, we can measure whether a user clicked a recommendation and whether they ended up purchasing anything after clicking. Through such retrospective analysis, we can only optimize the order of the recommendations the user was actually presented on a web page. With a better search model, however, we might have presented the user with a completely different set of recommendations, which could have led to greater click-through rates and revenue. We cannot quantify this hypothetical scenario from historical data alone, so we need alternative methods to assess algorithms as we improve them.

One way to do so is through the process of experimentation, or A/B testing, which takes its name from the practice of comparing outcomes from test subjects (for example, customers) randomly assigned to treatments A and B (for example, two search recommendation algorithms) to determine which generates the better result. In practice, there may be many more than two treatments, and the experiment can be randomized at the level of users, sessions (such as periods between login and logout on a website), products, or other units. While a truly comprehensive discussion of A/B testing is outside the scope of this chapter, we refer interested readers to more extensive references (Bailey, Rosemary A. Design of Comparative Experiments. Vol. 25. Cambridge University Press, 2008; Eisenberg, Bryan, and John Quarto-vonTivadar. Always Be Testing: The Complete Guide to Google Website Optimizer. John Wiley & Sons, 2009; Finger, Lutz, and Soumitra Dutta. Ask, Measure, Learn: Using Social Media Analytics to Understand and Influence Customer Behavior. O'Reilly Media, Inc., 2014).

Experimental allocation – assigning customers to experiments

You have an algorithm that you wish to improve—how can you compare its performance on a metric (such as revenue, retention, or engagement) against an existing model (or no predictive model at all)? In this comparison, we want to make sure to remove all potential confounding factors other than the two (or more) models themselves. This idea underlies the concept of experimental randomization: if we randomly assign customers (for example) to receive search recommendations from two different models, any variation in customer demographics such as age, income, and subscription tenure should be roughly the same between the groups. Thus, when we compare the performance of the models over time between groups following this random allocation, differences in performance can be attributed to the algorithms themselves, as we have already accounted for other potential sources of variation through this randomization.

How can we guarantee that users are assigned to experimental groups randomly? One possibility is to assign each member a random number between 0 and 1, and split them based on whether this number is greater than 0.5 or not. However, this method has the downside that it will be difficult to replicate our analysis, since the random number assigned to a user could change. Alternatively, we will often have user IDs, numeric identifiers assigned to each account. Assuming the form of this number is sufficiently randomized, we can take the modulus of this number (the remainder when divided by a fixed denominator, such as 2) and assign users to groups based on the result (for a denominator of 2, this would be 0 or 1 depending on whether the account ID is even or odd). Thus, users are randomly allocated to the two groups, but we can easily recreate this assignment in the future.
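As a concrete sketch of this idea (the function and experiment label here are illustrative, not from the text), a reproducible assignment can be derived by hashing the account ID together with an experiment-specific label and taking the modulus of the result; the hash decorrelates assignments across different experiments, which a raw modulus of the ID would not:

```python
import hashlib

def assign_group(account_id, n_groups=2, experiment="search-test-1"):
    """Deterministically map an account ID to one of n_groups.

    Hashing the ID together with an experiment label keeps group
    membership stable within an experiment (so the analysis can be
    recreated later) while decorrelating assignments across experiments.
    """
    digest = hashlib.md5(f"{experiment}:{account_id}".encode()).hexdigest()
    return int(digest, 16) % n_groups

# The assignment is reproducible: the same account always lands in the
# same group, and roughly half of all accounts land in each group.
print(assign_group(12345), assign_group(12345))
```

The experiment label plays the role of a seed; changing it for a new experiment reshuffles users, so the same accounts are not always grouped together across tests.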

We might also consider whether we always want a simple random allocation. In the ad-targeting example discussed previously, we are actually more concerned with the performance of the algorithm on nonsubscribers, rather than the existing users who would comprise most of a random allocation in our example. Thus, depending upon our objective, we may want to randomly allocate stratified samples, in which we oversample some accounts to compensate for inherent skew in the data. For example, we might enforce a roughly equal number of accounts per country to offset geographical bias toward a more populous region, or equal numbers of teenage and adult users for a service with primarily younger users.
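A minimal sketch of such stratified allocation (the field names and counts are made up for illustration) groups accounts by stratum and draws an equal-sized random sample from each:

```python
import random

def stratified_sample(users, stratum_key, per_stratum, seed=42):
    """Draw an equal-sized random sample from each stratum, so that
    (for example) every country contributes the same number of accounts."""
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    strata = {}
    for user in users:
        strata.setdefault(stratum_key(user), []).append(user)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# A skewed population: 90% of accounts from one populous country.
users = [{"id": i, "country": "US" if i % 10 else "CA"} for i in range(1000)]
balanced = stratified_sample(users, lambda u: u["country"], per_stratum=50)
print(len(balanced))
```

Here the stratified draw yields 50 accounts per country despite the 9:1 skew in the underlying population.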

In addition to randomly assigning users to receive an experience (such as search recommendations or ads sent through e-mail) dictated by a particular algorithm, we often need a control, a baseline to which to compare the results. In some cases, the control might be the outcome expected with no predictive model used at all. In others, we are comparing the old predictive model to a new version.

Deciding a sample size

Now that we know what we are trying to test and have a way to randomly assign users, how should we determine how many to allocate to an experiment? If we have a control and several experimental conditions, how many users should we assign to each group? If our predictive model relies upon user interaction (for example, gauging the performance of a search model requires the user to visit a website) that may not be guaranteed to occur for every member of the experimental population, how many activities (for example, searches) do we need to accumulate to judge the success of our experiment? These questions all concern making estimates of effect size and experimental power.

As you may recall from statistics, in a controlled experiment we are trying to determine whether the differences in outcomes between two populations (for example, the revenue generated from groups of users in our experimental evaluation of different prediction algorithms for ad targeting) are more likely due to random chance or to actual differences in the performance of an algorithm. These two options are also known as the null hypothesis, often represented by H0 (that is, there is no difference between the two groups), and the alternative, represented by H1. To determine whether an effect (for example, the difference in revenue between the two groups) is explained by random chance, we compare this effect to a distribution (frequently the t-distribution, for reasons we will discuss below) and ask how likely it is that we would observe this effect or greater if the true effect were 0. This value, the cumulative probability of an effect greater than or equal to the observed one under the assumption of no effect, is known as the p-value, to which we often apply a threshold such as 0.05 (for a one-sided test at this threshold, the observed statistic must fall in the outer 5% tail of the standard normal distribution).
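For a normally distributed test statistic, this tail probability can be computed directly. The sketch below (a plain-Python illustration, not code from the chapter) uses the complementary error function to evaluate the upper-tail p-value:

```python
import math

def one_sided_p(z):
    """Upper-tail p-value for a standard normal test statistic z:
    P(Z >= z) for Z ~ N(0, 1), via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# A statistic of about 1.645 sits right at the 0.05 threshold:
print(round(one_sided_p(1.645), 3))  # -> 0.05
```

Any statistic larger than roughly 1.645 therefore clears a one-sided 0.05 threshold, which is the critical value used in the sample-size calculation below in this section.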


When we are assessing this statistical significance, we may encounter two kinds of errors, because any measurement of an effect is subject to uncertainty. We never really know the true value of an effect; rather, we measure this real, unknown effect with some error. First, we could inaccurately declare a result to be statistically significant when it is not; this is known as a type I error (a false positive). Second, we could fail to declare a result statistically significant when it actually is (a type II error, or false negative).

We can ask how many samples we need to declare a particular effect (for example, a revenue difference) significant, if there really were a difference between our two populations. While exact applications may vary, we will assume for illustration that the two groups are sufficiently large that any difference between measured average values (such as revenue or click-through rate) follows a normal distribution, which follows from the Central Limit Theorem. We can then evaluate this difference using the t-distribution, which approximates the standard normal distribution for large samples but does not require that we know the population mean and variance, just the mean and variance of a sample. Calculating the necessary number of samples then requires just the following formula (for a t-test between samples whose variances are potentially unequal, also known as Welch's t-test):

t = (Y1 - Y2) / S

Here, Y1 and Y2 are the average effects (for example, revenue per customer) of the two groups, and S (the standard error of their difference) is given by the following equation:

S = sqrt(S1^2/n1 + S2^2/n2)

Here, S1^2 and S2^2 are the sample variances, and n1 and n2 are the sample sizes of the two groups. So if we want to be able to detect a difference of 10, for example, with a p-value of 0.05, we solve for the sample sizes at which the t-statistic reaches the critical value 1.64 (the normal approximation to the point above which the standard normal distribution has 5% of its probability mass). We can solve:

10 / sqrt(S1^2/n1 + S2^2/n2) >= 1.64

Thus, given values for the variance of the groups in our experiment, we can plug in different values of n for the two groups and see if they are sufficient to fulfill the inequality. For this application, we might estimate the variance by looking at historical data for revenue among users of a given sample size.
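This trial-and-error search is easy to automate. The sketch below (illustrative function names; the variance value is a placeholder you would estimate from historical data) checks the inequality for a given pair of group sizes and finds the smallest equal group size that satisfies it:

```python
import math

def detectable(n1, n2, var1, var2, effect=10.0, z=1.64):
    """True if the effect divided by the Welch standard error
    sqrt(var1/n1 + var2/n2) reaches the critical value z."""
    se = math.sqrt(var1 / n1 + var2 / n2)
    return effect / se >= z

def min_equal_n(var1, var2, effect=10.0, z=1.64):
    """Smallest equal per-group size n for which the inequality holds."""
    n = 1
    while not detectable(n, n, var1, var2, effect, z):
        n += 1
    return n

# With a historical variance of 1000 in each group, detecting a
# difference of 10 at the one-sided 0.05 level needs 54 users per group:
print(min_equal_n(1000.0, 1000.0))  # -> 54
```

The linear scan is fine at this scale; for very small effects relative to the variance, a closed-form solve of the inequality for n would be quicker.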

If you look at the right-hand side of the preceding equation carefully, you will see that (assuming reasonably similar sample variances, which is not an unreasonable assumption in many large-scale experiments such as those conducted on consumer websites) this value will be determined by the smaller of n1 and n2, since as we increase one sample size the term containing it tends toward 0. Thus, we often achieve optimal power by assigning equal sample sizes to the groups being compared. This fact is important in considering how to decide the relative sizes of our control and experimental cells. Take an example in which we have three versions of an ad-targeting algorithm, along with no algorithm at all as a control, and measure the resulting click-through rate. Based on the preceding calculation, we need to decide what our main question is. If we want to know whether any algorithm is better than no algorithm, we should split users evenly between the control and the pool of three algorithm variants. However, if we instead want to decide which algorithm is best compared to control, we want equal numbers of users in all four cells, so that the control and each treatment are of approximately equal size.
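To see numerically why the smaller group dominates, compare the standard error of a lopsided split with that of a balanced one at the same total sample size (the numbers here are illustrative):

```python
import math

def welch_se(var1, n1, var2, n2):
    """Standard error of the difference in group means under Welch's t-test."""
    return math.sqrt(var1 / n1 + var2 / n2)

# Same total of 10,100 users and unit variance in both groups:
lopsided = welch_se(1.0, 100, 1.0, 10000)   # dominated by the group of 100
balanced = welch_se(1.0, 5050, 1.0, 5050)
print(round(lopsided, 4), round(balanced, 4))
```

Despite identical totals, the balanced split's standard error is roughly five times smaller, so the balanced design can detect much smaller effects.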

Note that the preceding calculation assumes we are interested in a fixed difference of 10 in response between the two groups. We could also just ask whether there is any difference at all (that is, whether the difference is nonzero). The choice depends on whether any lift delivered by the algorithm is valuable or whether a fixed improvement is necessary to achieve the business goal at hand.

Multiple hypothesis testing

The last topic we will cover is somewhat subtle, but important: with models with numerous tunable parameters and algorithm variations, we may often perform a large number of hypothesis tests within a single A/B experiment. While we might evaluate each test at a significance level of 0.05, if we perform 20 such evaluations we expect 20 * 0.05 = 1 false positive on average, even if every result is in truth random noise (the probability of at least one false positive is 1 - 0.95^20, or about 64%). This issue, known as multiple hypothesis testing, means we may need to recalibrate our significance threshold. The simplest way to do so is to divide the p-value threshold we use (for example, 0.05) by the number of tests performed (20) to obtain a new threshold for significance. This is known as the Bonferroni correction (Dunn, Olive Jean. "Estimation of the medians for dependent variables." The Annals of Mathematical Statistics (1959): 192-197; Dunnett, Charles W. "A multiple comparison procedure for comparing several treatments with a control." Journal of the American Statistical Association 50.272 (1955): 1096-1121) and, while correct, may be overly conservative in some scenarios, as it controls the probability of even a single false positive across all the tests. However, in exploratory analyses, we can often accept some nonzero false positive rate as long as we are reasonably sure that a majority of the significant results are replicable. In this scenario, a false discovery rate (FDR) approach may be preferable. While a discussion of FDR control is outside the scope of this chapter, we refer the interested reader to references on the subject (Shaffer, Juliet Popper. "Multiple hypothesis testing." Annual Review of Psychology 46 (1995): 561; Toothaker, Larry E. Multiple Comparison Procedures. No. 89. Sage, 1993).
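A minimal sketch of the Bonferroni procedure (the p-values below are made up for illustration):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag each test as significant only if its p-value clears the
    corrected threshold alpha / (number of tests)."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Twenty tests: an uncorrected threshold of 0.05 would flag the first
# two, but the corrected threshold of 0.05 / 20 = 0.0025 keeps only the
# first.
p_values = [0.001, 0.01] + [0.3] * 18
print(bonferroni_significant(p_values)[:2])  # -> [True, False]
```

Note how the 0.01 result, comfortably significant on its own, no longer survives once the number of tests is taken into account.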
