Chapter Eighteen

Epilogue

In writing this book we had to make some decisions. One decision we faced often was: should Method X appear in the book or not? Ultimately, we had to stop writing at some point, which meant omitting some interesting methods. As a partial remedy, we have written this epilogue. So while the following methods did not appear as chapters in the book, we recommend them for those readers who are eager to learn more and wish the book hadn’t ended just yet. However, there is always the hope of a second edition, so we welcome reader feedback and suggestions.

Analytic Hierarchy Process (AHP)

In the 1970s, Thomas Saaty invented his Analytic Hierarchy Process (AHP) to help decision makers make complex, multi-criteria decisions [68, 69]. In recognition of the method’s widespread use and impact, which extends to governments and militaries worldwide, INFORMS (the Institute for Operations Research and the Management Sciences) awarded Dr. Saaty and his AHP method its prestigious Impact Prize in 2008.

The heart of the AHP method is its reciprocal pair-wise comparison matrix, from which a rating vector is produced by computing the matrix’s dominant eigenvector. In this sense, AHP has a strong connection to the Keener method of Chapter 4. In another sense, AHP has a strong connection to the Massey method of Chapter 2. In a very clever analysis, David Gleich has shown that the geometric AHP method, which replaces the standard AHP’s arithmetic mean with a geometric mean, is mathematically equivalent to the Massey method [31]. The AHP method has been applied to college football in [11] and to Israeli soccer in [71].
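
To make this concrete, here is a minimal sketch in Python that builds a small reciprocal pairwise comparison matrix for four teams and approximates its dominant eigenvector with the power method; the normalized result serves as the rating vector. The comparison values are invented for illustration, not taken from the AHP literature.

import numpy as np

# Invented reciprocal pairwise comparison matrix for four teams:
# entry (i, j) says how strongly team i is preferred to team j,
# and entry (j, i) is its reciprocal, as AHP requires.
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 2.0, 1/2],
    [1/5, 1/2, 1.0, 1/4],
    [1/2, 2.0, 4.0, 1.0],
])

# Power method: repeatedly apply A and renormalize; the iterates
# converge to the dominant eigenvector, which is the rating vector.
r = np.ones(A.shape[0])
for _ in range(100):
    r = A @ r
    r = r / r.sum()

print(np.round(r, 3))   # ratings sum to 1; larger means stronger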

The Redmond Method

In a 2003 Mathematics Magazine article [61], Charles Redmond introduced a rating method that is a natural generalization of the win-loss rating system. The Redmond method begins with the idea of a team’s average dominance that is computed by summing a team’s point differentials, both positive and negative, and dividing by the number of games that team played.
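
In symbols (using notation of our own choosing rather than Redmond’s), team i’s average dominance d_i can be written as

d_i = \frac{1}{g_i} \sum_{k=1}^{g_i} \left( s_{i,k} - a_{i,k} \right),

where g_i is the number of games team i played and s_{i,k} and a_{i,k} are the points team i scored and allowed, respectively, in its k-th game.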

It was a tough choice to omit Redmond’s method because it involves some interesting linear algebra. However, it falls into the YAMM category (yet another matrix method). Its results are often in the same ballpark as those of other YAMMs, but the method is limited because it requires all teams (or competitors) to play the same number of games.

The Park-Newman Method

In [59], Juyong Park and M. E. J. Newman take a network approach to ranking U.S. college football teams. Their method considers both direct and indirect wins to compute a win score and a loss score for each team. An indirect win of team i over team j occurs when team i beats a team k that beats team j. Thus, even though teams i and j did not meet in a direct matchup, some information is still inferred from the indirect relationship of length 2. Relationships of length 3, 4, and higher can also be considered, each with successively discounted weight. The Park-Newman method uses some very elegant mathematics to account for relationships of all lengths at once; the user sets a discounting parameter that controls how much each additional length is downgraded. This method draws interesting connections to both the Markov method of Chapter 6 and the OD method of Chapter 7.
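
As a rough sketch of this idea (our own simplified formulation with an invented schedule and discount value, not necessarily the exact formulation in [59]), let W be the matrix with W[i, j] = 1 when team i beat team j, let e be the vector of all ones, and let alpha be the discounting parameter. Direct wins plus discounted indirect wins of every length can then be summed in closed form, since We + alpha W^2 e + alpha^2 W^3 e + ... = (I - alpha W)^{-1} W e whenever the series converges; loss scores come from the same calculation with W transposed, and teams are ranked by win score minus loss score.

import numpy as np

# Invented results: W[i, j] = 1 if team i beat team j (teams 0..3).
W = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

alpha = 0.5                      # discount for each extra link in a win chain
n = W.shape[0]
I, ones = np.eye(n), np.ones(n)

# Closed-form sum of (W + alpha W^2 + alpha^2 W^3 + ...) applied to ones.
win_score  = np.linalg.solve(I - alpha * W,   W   @ ones)
loss_score = np.linalg.solve(I - alpha * W.T, W.T @ ones)

print(np.round(win_score - loss_score, 3))   # rank teams from high to low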

Logistic Regression/Markov Chain Method (LRMC)

The LRMC rating method developed by Sokol and Kvam [48] was designed to use point score information plus home court advantage to rank teams in college basketball. Their method has been successful at predicting games in the March Madness tournament and has enabled many fans to win their office pools.

The Markov chain part of the LRMC method is similar in some respects to the Markov method of Chapter 6. The ultimate goal is the same—to calculate the stationary, or dominant, eigenvector of the Markov transition matrix. One difference is that the LRMC method uses logistic regression to cleverly estimate the elements in the Markov transition matrix, accounting for home court advantage. The authors of LRMC also show a nice connection between the LRMC and the Colley and Massey methods, which are built around the strength of schedule philosophy.
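
To illustrate just the Markov chain half of the idea (with invented head-to-head probabilities standing in for the logistic-regression estimates that LRMC would produce), the sketch below lets a hypothetical voter on team i move to team j with probability proportional to the estimated chance that j is the better team, and stay put otherwise; the stationary distribution of that chain is the rating vector.

import numpy as np

# Invented estimates: p[i, j] = probability that team j is better than team i
# (in LRMC these would come from a logistic regression on scoring margins
# that adjusts for home-court advantage).
p = np.array([
    [0.0, 0.3, 0.2, 0.4],
    [0.7, 0.0, 0.4, 0.6],
    [0.8, 0.6, 0.0, 0.7],
    [0.6, 0.4, 0.3, 0.0],
])

n = p.shape[0]
S = p / n                                  # move from i to j with prob p[i, j] / n
np.fill_diagonal(S, 1.0 - S.sum(axis=1))   # stay at i with the leftover probability

# Power iteration for the stationary vector pi satisfying pi = pi S.
pi = np.full(n, 1.0 / n)
for _ in range(1000):
    pi = pi @ S

print(np.round(pi, 3))   # more stationary mass means a stronger team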

Hochbaum Methods

Dorit Hochbaum, an expert in the theory of optimization, has built several ranking methods based on network optimization [2, 38, 39, 37]. Hochbaum has analyzed her methods with respect to their computational effort, complexity, and susceptibility to manipulation. These methods are adaptable in that the objective functions can be tailored as needed. When certain properties are satisfied, some of these optimization methods for ranking can compete, in terms of computation time, with linear-algebra-based ranking methods.

Monte Carlo Simulations

Simulation is a popular technique favored by many technicians, particularly those interested in analyzing baseball. Commercial sports forecasting companies such as Accuscore.com often use simulation as their primary tool. Using statistics compiled from past performances, a game between two teams can be simulated on a computer by running a Markov chain whose states are various aspects of the game (e.g., a hit against a given pitcher, a fly ball, a runner on first base being thrown out at second base on a hit to left field, etc.), and whose transition probabilities are constructed from past statistics. Simulating thousands of games between two teams and averaging the results is one way to produce ratings and make predictions. Simulation works quite well when applied to baseball, but when applied to other sports, especially NFL football, it is more or less on par with many of the less involved techniques covered in this book. Simulation is an interesting and somewhat deep subject that can fill a book by itself; the interested and more advanced reader will find many rich and varied discussions with a simple Google search.
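
As a toy illustration of the idea (with fabricated scoring averages and a deliberately crude model standing in for the detailed play-by-play Markov chains used in practice), the sketch below draws each team’s runs from a Poisson distribution fit to invented historical averages and estimates a head-to-head win probability by averaging thousands of simulated games.

import numpy as np

rng = np.random.default_rng(0)

# Fabricated runs-per-game averages standing in for compiled past statistics.
avg_runs = {"Team A": 5.1, "Team B": 4.3}

def simulate_games(n_games=10_000):
    """Simulate n_games and estimate Team A's head-to-head win probability."""
    a = rng.poisson(avg_runs["Team A"], size=n_games)
    b = rng.poisson(avg_runs["Team B"], size=n_games)
    wins_a, wins_b = (a > b).sum(), (b > a).sum()
    return wins_a / (wins_a + wins_b)      # tied simulations are simply discarded

print(f"Estimated P(Team A beats Team B) = {simulate_games():.3f}")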

Hard Core Statistical Analysis

We decided to forgo purely statistical methodology, which is probably a disappointment to hard core statisticians. Statistical analysis is a viable approach, particularly when ample statistics are available, and a tremendous array of statistical techniques can be brought to bear. But like simulation, statistical analysis is an area unto itself that can fill volumes, so we decided not to open Pandora’s box in this regard. It would nevertheless be interesting to compare some of the algebraic methods contained in this book with methods predicated on fitting distributions to observed data for the purpose of formulating ratings and rankings. Massey hints on his Web site that he now relies more on statistical techniques than on the algebraic methods described in Chapter 2.

And So Many Others

It would require many books to completely survey all of the rating and ranking models that have been proposed. The number of models for football alone is staggering. Listed below is a sample of the vast number of sources compiled by David Wilson. Many of these are available from the following Web site, which was active at the time of this writing:
         homepages.cae.wisc.edu/~dwilson/rsfc/rate/biblio.html

• I. Ali, W. Cook, and M. Kress. On the minimum violations ranking of a tournament. Management Science, 32(6):660–672, 1986.

• B. Amoako-Adu, H. Marmer, and J. Yagil. The efficiency of certain speculative markets and gambler behavior. Journal of Economics and Business, 37, 1985.

• L. B. Anderson. Paired comparisons. Operations Research and the Public Sector: Handbooks in Operations Research and Management Science, S. M. Pollock, M. H. Rothkopf, and A. Barnett, eds., 6(Chapt. 17):585–620, 1994.

• David H. Annis and Bruce A. Craig. Hybrid paired comparison analysis, with applications to the ranking of college football teams. Journal of Quantitative Analysis in Sports, 1(1), 2005.

• David H. Annis. Dimension reduction for hybrid paired comparison models. Journal of Quantitative Analysis in Sports, 3(2), 2007.

• David H. Annis and Samuel S. Wu. A comparison of potential playoff systems for NCAA I-A Football.

• Gilbert W. Bassett. Robust sports ratings based on least absolute errors. The American Statistician, May:1–7, 1997.

• R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: The method of paired comparisons. Biometrika, 39:324–45, 1952. The idea is believed to have been first proposed by E. Zermelo in “Die Berechnung der Turnier-Ergebnisse als ein Maximumproblem der Wahrscheinlichkeitsrechnung,” Mathematische Zeitschrift, 29:436–60, 1929.

• Hans Bühlmann and Peter J. Huber. Pairwise comparison and ranking in tournaments. The Annals of Mathematical Statistics, pp. 501–510, 1962.

• Thomas Callaghan, Mason A. Porter, and Peter J. Mucha. Random walker ranking for NCAA Division I-A football. Georgia Institute of Technology, 2003.

• Thomas Callaghan, Peter J. Mucha, and Mason A. Porter. The bowl championship series: A mathematical review. Notices of the AMS, September:887–893, 2004.

• C. R. Cassady, L. M. Maillart, and S. Salman. Ranking sports teams: a customizable quadratic assignment approach. Interfaces 35(6): 497–510, 2005.

• P. Y. Chebotarev and E. Shamis. Preference fusion when the number of alternatives exceeds two: Indirect scoring procedures. Journal of the Franklin Institute: Engineering and Applied Mathematics, 336(2):205–226, 1999.

• G. R. Conner and C. P. Grant. An extension of Zermelo’s model for ranking by paired comparison. European Journal of Applied Mathematics, 11(3):225–247, 2000.

• W. Cook, I. Golan and M. Kress. Heuristics for ranking players in round robin tournaments. Computers and Operations Research, 15(2):135–144, 1988.

• Morris L. Eaton. Some optimum properties of ranking procedures. The Annals of Mathematical Statistics, July:124–137, 1966.

• Arpad E. Elo. The Rating of Chess Players Past and Present, 2nd Ed., Arco Publishing, 1986.

• L. Fahrmeir and G. Tutz. Dynamic stochastic models for time-dependent ordered pair-comparison systems. Journal of the American Statistical Association, 89:1438–49, 1994.

• Christopher J. Farmer. Probabilistic modelling in multi-competitor games. Univ. of Edinburgh, 2003.

• John A. Flueck and James F. Korsh. A generalized approach to maximum likelihood paired comparison ranking. The Annals of Statistics, 3(4):846–861, 1975.

• L. R. Ford Jr. Solution of a ranking problem from binary comparisons. American Mathematical Monthly, 64(8):28–33, 1957.

• Mark E. Glickman and H. S. Stern. A state-space model for National Football League Scores. Journal of the American Statistical Association, 93:25–35, 1998.

• S. Goddard. Ranking in tournaments and group decision making. Management Science, 29(12):1384–1392, 1983.

• Clive Grafton. Junior college football rating systems. Statistics Bureau of the National Junior College Athletic Association, 1955.

• S. S. Gupta and Milton Sobel. On a statistic which arises in selection and ranking problems. The Annals of Mathematical Statistics, pp. 957–967, 1957.

• David Harville. The use of linear-model methodology to rate high school or college football teams. Journal of the American Statistical Association, June:278–289, 1977.

• David Harville. Predictions for NFL games via linear-model methodology. Journal of the American Statistical Association, September:516–524, 1977.

• David Harville and M. H. Smith. The home-court advantage: How large is it and does it vary from team to team? The American Statistician, 48(1):22–28, 1994.

• David Harville. College football: A modified least-squares approach to rating and prediction. American Statistical Proceedings of the Section on Statistics in Sports, 2002.

• David Harville. The selection and/or seeding of college basketball or football teams for postseason competition: A statistician’s perspective. American Statistical Proceedings of the Section on Statistics in Sports, pp. 1–18, 2000.

• David Harville. The selection or seeding of college basketball or football teams for postseason competition. Journal of the American Statistical Association, 98(461):17–27, 2003.

• Dorit S. Hochbaum. Ranking sports teams and the inverse equal paths problem. Department of Industrial Engineering and Operations Research and Walter A. Haas School of Business, University of California, Berkeley.

• Tzu-Kuo Huang, Ruby C. Weng, and Chih-Jen Lin. Generalized Bradley-Terry models and multi-class probability estimates. Journal of Machine Learning Research, 7:85–115, 2006.

• Peter J. Huber. Pairwise comparison and ranking: Optimum properties of the row sum procedure. The Annals of Mathematical Statistics, pp. 511–520, 1962.

• Thomas Jech. The ranking of incomplete tournaments: A mathematician’s guide to popular sports. American Mathematical Monthly, 90(4):246–66, 1983.

• Samuel Karlin. Mathematical Methods & Theory in Games, Programming, & Economics, Dover Publications, March 1992.

• L. Knorr-Held. Dynamic rating of sports teams. The Statistician, 49:261–76, 2000.

• R. J. Leake. A method for ranking teams: With an application to college football. Management Science in Sports, ed. R. E. Machol et al., North-Holland Publishing Co., pp. 27–46, 1976.

• J. H. Lebovic and L. Sigelman. The forecasting accuracy and determinants of football rankings. International Journal of Forecasting, 17(1):105–120, 2001.

• Joseph Martinich. College football rankings: Do computers know best? Interfaces, 32(5):85–94, 2002.

• William N. McFarland. An Examination of Football Scores, Waverly Press, 1932.

• David Mease. A penalized maximum likelihood approach for the ranking of college football teams independent of victory margins. The American Statistician, November, 2003.

• Joshua Menke and Tony Martinez. A Bradley-Terry artificial neural network model for individual ratings in group competitions. Computer Science Department, Brigham Young University, 2006.

• D. J. Mundfrom, R. L. Heiny, and S. Hoff, Power ratings for NCAA division II football. Communications in Statistics Simulation and Computation, 34(3):811–826, 2005.

• Juyong Park and M. E. J. Newman. Network-based ranking system for U.S. college football. Journal of Statistical Physics, 2005.

• Michael B. Reid. Least squares model for predicting college football scores. University of Utah, 2003.

• Jagbir Singh and W. A. Thompson Jr. A treatment of ties in paired comparisons. The Annals of Mathematical Statistics, 39(6):2002–2015, 1968.

• Z. Sinuany-Stern. Ranking of sports teams via the AHP. Journal of the Operational Research Society, 39(7):661–667, 1988.

• Warren D. Smith. Rating systems for game players, and learning. NEC, July, 1994.

• M. S. Srivastava and J. Ogilvie. The performance of some sequential procedures for a ranking problem. The Annals of Mathematical Statistics, 39(3):1040–1047, 1968.

• Raymond T. Stefani. Football and basketball predictions using least squares. IEEE Transactions on Systems, Man, and Cybernetics, 7, 1977.

• Raymond T. Stefani. Improved least squares football, basketball, and soccer predictions. IEEE Transactions on Systems, Man, and Cybernetics, pp. 116–123, 1980.

• Hal Stern. A continuum of paired comparisons models. Biometrika, 77(2):265–73, 1990.

• Hal Stern. On the probability of winning a football game. The American Statistician, August:179–183, 1991.

• Hal Stern. Who’s number one? - rating football teams. Proceedings of the Section on Statistics in Sports, pp. 1–6, 1992.

• Hal Stern. Who’s number 1 in college football? . . . and how might we decide? Chance, Summer:7–14, 1995.

• H. S. Stern and B. Mock. College basketball upsets: will a 16-seed ever beat a 1-seed? Chance, 11:26–31, 1998.

• H. S. Stern. Statistics and the college football championship. American Statistician, 58(3):179–185, 2004.

• H. S. Stern. In favor of a quantitative boycott of the bowl championship series. Journal of Quantitative Analysis in Sports, 2(1), 2006.

• Daniel F. Stone. Testing Bayesian updating with the AP top 25. Johns Hopkins University, October, 2007.

• I. B. Thomas. Method of ranking college football teams, Allen, Lane and Scott, 1922.

• Mark Thompson. On any given Sunday: Fair competitor orderings with maximum likelihood methods. Journal of the American Statistical Association, 70:536–541, 1975.

• Y. L. Tong. An adaptive solution to ranking and selection problems. The Annals of Statistics, 6(3):658–672, 1978.

• John A. Trono. Applying the overtake and feedback algorithm. Dr. Dobb’s Journal, February:36–41, 2004.

• John A. Trono. An effective nonlinear rewards-based ranking system. Journal of Quantitative Analysis in Sports, 3(2), 2007.

• Brady T. West and Madhur Lamsal. A new application of linear modeling in the prediction of college football bowl outcomes and the development of team ratings. Journal of Quantitative Analysis in Sports, 4(3), 2008.

• R. Wilkins. Electrical networks and sports competition. Electronics and Power, 29(5):414–418, 1983.

• R. L. Wilson. Ranking college football teams: A neural network approach. Interfaces, 25(16):44–59, 1995.

• R. L. Wilson. The “real” mythical college football champion. Operations Research/Management Science Today, pp. 24–29, 1995.

• R. A. Zuber, J. M. Gander, and B. D. Bowers. Beating the spread: Testing the efficiency of the gambling market for National Football League games. Journal of Political Economy, 93, 1985.

By The Numbers —

7.3 = the rating (on a scale of 1 to 10) received by the White Russian cocktail.
    —It is the #1 ranked drink recipe (out of 100), and it received 324 votes.

6.2 = the rating received by Sex on the Beach #2.
    —It is the lowest rated cocktail (#100), and it received only 29 votes.

www.drinknation.com/drinks/best
