REFERENCES

Adichie, J. N. [1967], “Estimates of regression parameters based on rank tests,” Ann. Math. Stat., 38, 894–904.

Aitkin, M. A. [1974], “Simultaneous inference and the choice of variable subsets,” Technometrics, 16, 221–227.

Akaike, H. [1973], “Information theory and an extension of the maximum likelihood principle,” in B. N. Petrov and F. Csaki (Eds.), Second International Symposium on Information Theory, Akademiai Kiado, Budapest.

Allen, D. M. [1971], “Mean square error of prediction as a criterion for selecting variables,” Technometrics, 13, 469–475.

Allen, D. M. [1974], “The relationship between variable selection and data augmentation and a method for prediction,” Technometrics, 16, 125–127.

Andrews, D. F. [1971], “Significance tests based on residuals,” Biometrika, 58, 139–148.

Andrews, D. F. [1974], “A robust method for multiple linear regression,” Technometrics, 16, 523–531.

Andrews, D. F. [1979], “The robustness of residual displays,” in R. L. Launer and G. N. Wilkinson (Eds.), Robustness in Statistics, Academic Press, New York, pp. 19–32.

Andrews, D. F., P. J. Bickel, F. R. Hampel, P. J. Huber, W. H. Rogers, and J. W. Tukey [1972], Robust Estimates of Location, Princeton University Press, Princeton, N.J.

Anscombe, F. J. [1961], “Examination of residuals,” in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, University of California, Berkeley, pp. 1–36.

Anscombe, F. J. [1967], “Topics in the investigation of linear relations fitted by the method of least squares,” J. R. Stat. Soc. Ser. B, 29, 1–52.

Anscombe, F. J. [1973], “Graphs in statistical analysis,” Am. Stat., 27(1), 17–21.

Anscombe, F. J. and J. W. Tukey [1963], “The examination and analysis of residuals,” Technometrics, 5, 141–160.

Askin, R. G. and D. C. Montgomery [1980], “Augmented robust estimators,” Technometrics, 22, 333–341.

Askin, R. G. and D. C. Montgomery [1984], “An analysis of constrained robust regression estimators,” Nav. Res. Logistics Q., 31, 283–296.

Atkinson, A. C. [1983], “Diagnostic regression analysis and shifted power transformations,” Technometrics, 25, 23–33.

Atkinson, A. C. [1985], Plots, Transformations, and Regression, Clarendon Press, Oxford.

Atkinson, A. C. [1994], “Fast very robust methods for the detection of multiple outliers,” J. Am. Stat. Assoc., 89, 1329–1339.

Bailer, A. J. and Piegorsch, W. W. [2000], “From quantal counts to mechanisms and systems: The past, present, and future of biometrics in environmental toxicology,” Biometrics, 56, 327–336.

Barnett, V. and T. Lewis [1994], Outliers in Statistical Data, 3rd ed., Wiley, New York.

Bates, D. M. and D. G. Watts [1988], Nonlinear Regression Analysis and Its Applications, Wiley, New York.

Beaton, A. E. [1964], The Use of Special Matrix Operators in Statistical Calculus, Research Bulletin RB-64-51, Educational Testing Service, Princeton, N.J.

Beaton, A. E. and J. W. Tukey [1974], “The fitting of power series, meaning polynomials, illustrated on band spectroscopic data,” Technometrics, 16, 147–185.

Belsley, D. A., E. Kuh, and R. E. Welsch [1980], Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, Wiley, New York.

Bendel, R. B. and A. A. Afifi [1974], “Comparison of stopping rules in forward stepwise regressions,” presented at the Joint Statistical Meeting, St. Louis, Mo.

Berk, K. N. [1978], “Comparing subset regression procedures,” Technometrics, 20, 1–6.

Berkson, J. [1950], “Are there two regressions?” J. Am. Stat. Assoc., 45, 164–180.

Berkson, J. [1969], “Estimation of a linear function for a calibration line; consideration of a recent proposal,” Technometrics, 11, 649–660.

Bishop, C. M. [1995], Neural Networks for Pattern Recognition, Clarendon Press, Oxford.

Bloomfield, P. and W. L. Steiger [1983], Least Absolute Deviations: Theory, Applications, and Algorithms, Birkhäuser Verlag, Boston.

Book, D., J. Booker, H. O. Hartley, and R. L. Sielken, Jr. [1980], “Unbiased L1 estimators and their covariances,” ONR THEMIS Technical Report No. 64, Institute of Statistics, Texas A&M University.

Box, G. E. P. [1966], “Use and abuse of regression,” Technometrics, 8, 625–629.

Box, G. E. P. and D. W. Behnken [1960], “Some new three level designs for the study of quantitative variables,” Technometrics, 2, 455–475.

Box, G. E. P. and D. R. Cox [1964], “An analysis of transformations,” J. R. Stat. Soc. Ser. B, 26, 211–243.

Box, G. E. P. and N. R. Draper [1959], “A basis for the selection of a response surface design,” J. Am. Stat. Assoc., 54, 622–654.

Box, G. E. P. and N. R. Draper [1963], “The choice of a second-order rotatable design,” Biometrika, 50, 335–352.

Box, G. E. P. and N. R. Draper [1987], Empirical Model Building and Response Surfaces, Wiley, New York.

Box, G. E. P. and J. S. Hunter [1957], “Multifactor experimental designs for exploring response surfaces,” Ann. Math. Stat., 28, 195–242.

Box, G. E. P., W. G. Hunter, and J. S. Hunter [1978], Statistics for Experimenters, Wiley, New York.

Box, G. E. P., G. M. Jenkins, and G. C. Reinsel [1994], Time Series Analysis, Forecasting, and Control, 3rd ed., Prentice-Hall, Englewood Cliffs, N.J.

Box, G. E. P. and P. W. Tidwell [1962], “Transformation of the independent variables,” Technometrics, 4, 531–550.

Box, G. E. P. and J. M. Wetz [1973], “Criterion for judging the adequacy of estimation by an approximating response polynomial,” Technical Report No. 9, Department of Statistics, University of Wisconsin, Madison.

Bradley, R. A. and S. S. Srivastava [1979], “Correlation and polynomial regression,” Am. Stat., 33, 11–14.

Breiman, L., J. H. Friedman, R. A. Olshen, and C. J. Stone [1984], Classification and Regression Trees, Wadsworth, Belmont, Calif.

Brown, P. J. [1977], “Centering and scaling in ridge regression,” Technometrics, 19, 35–36.

Brown, R. L., J. Durbin, and J. M. Evans [1975], “Techniques for testing the constancy of regression relationships over time (with discussion),” J. R. Stat. Soc. Ser. B, 37, 149–192.

Buse, A. and L. Lim [1977], “Cubic splines as a special case of restricted least squares,” J. Am. Stat. Assoc., 72, 64–68.

Cady, F. B. and D. M. Allen [1972], “Combining experiments to predict future yield data,” Agron. J., 64, 211–214.

Carroll, R. J. and D. Ruppert [1985], “Transformation in regression: A robust analysis,” Technometrics, 27, 1–12.

Carroll, R. J. and D. Ruppert [1988], Transformation and Weighting in Regression, Chapman & Hall, London.

Chapman, R. E. [1997–98], “Degradation study of a photographic developer to determine shelf life,” Quality Engineering, 10, 137–140.

Chatterjee, S. and B. Price [1977], Regression Analysis by Example, Wiley, New York.

Coakley, C. W. and T. P. Hettmansperger [1993], “A bounded influence, high breakdown, efficient regression estimator,” J. Am. Stat. Assoc., 88, 872–880.

Cochrane, D. and G. H. Orcutt [1949], “Application of least squares regression to relationships containing autocorrelated error terms,” J. Am. Stat. Assoc., 44, 32–61.

Conniffe, D. and J. Stone [1973], “A critical view of ridge regression,” The Statistician, 22, 181–187.

Conniffe, D. and J. Stone [1975], “A reply to Smith and Goldstein,” The Statistician, 24, 67–68.

Cook, R. D. [1977], “Detection of influential observation in linear regression,” Technometrics, 19, 15–18.

Cook, R. D. [1979], “Influential observations in linear regression,” J. Am. Stat. Assoc., 74, 169–174.

Cook, R. D. [1993], “Exploring partial residual plots,” Technometrics, 35, 351–362.

Cook, R. D. and P. Prescott [1981], “On the accuracy of Bonferroni significance levels for detecting outliers in linear models,” Technometrics, 22, 59–63.

Cook, R. D. and S. Weisberg [1983], “Diagnostics for heteroscedasticity in regression,” Biometrika, 70, 1–10.

Cook, R. D. and S. Weisberg [1994], An Introduction to Regression Graphics, Wiley, New York.

Cox, D. R. and E. J. Snell [1974], “The choice of variables in observational studies,” Appl. Stat., 23, 51–59.

Curry, H. B. and I. J. Schoenberg [1966], “On Polya frequency functions IV: The fundamental spline functions and their limits,” J. Anal. Math., 17, 71–107.

Daniel, C. [1976], Applications of Statistics to Industrial Experimentation, Wiley, New York.

Daniel, C. and F. S. Wood [1980], Fitting Equations to Data, 2nd ed., Wiley, New York.

Davies, R. B. and B. Hutton [1975], “The effects of errors in the independent variables in linear regression,” Biometrika, 62, 383–391.

Davison, A. C. and D. V. Hinkley [1997], Bootstrap Methods and Their Application, Cambridge University Press, London.

De Jongh, P. J., T. De Wet, and A. H. Welsh [1988], “Mallows-type bounded-influence-regression trimmed means,” J. Am. Stat. Assoc., 83, 805–810.

DeLury, D. B. [1960], Values and Integrals of the Orthogonal Polynomials up to N = 26, University of Toronto Press, Toronto.

Dempster, A. P., M. Schatzoff, and N. Wermuth [1977], “A simulation study of alternatives to ordinary least squares,” J. Am. Stat. Assoc., 72, 77–90.

Denby, L. and W. A. Larsen [1977], “Robust regression estimators compared via Monte Carlo,” Commun. Stat., A6, 335–362.

Dodge, Y. [1987], Statistical Data Analysis Based on the L1-Norm and Related Methods, North-Holland, Amsterdam.

Dolby, G. R. [1976], “The ultrastructural relation: A synthesis of the functional and structural relations,” Biometrika, 63, 39–50.

Dolby, J. L. [1963], “A quick method for choosing a transformation,” Technometrics, 5, 317–325.

Draper, N. R., J. Guttman, and H. Kanemasa [1971], “The distribution of certain regression statistics,” Biometrika, 58, 295–298.

Draper, N. R. and H. Smith [1998], Applied Regression Analysis, 3rd ed., Wiley, New York.

Draper, N. R. and R. C. Van Nostrand [1977a], “Shrinkage estimators: Review and comments,” Technical Report No. 500, Department of Statistics, University of Wisconsin, Madison.

Draper, N. R. and R. C. Van Nostrand [1977b], “Ridge regression: Is it worthwhile?” Technical Report No. 501, Department of Statistics, University of Wisconsin, Madison.

Draper, N. R. and R. C. Van Nostrand [1979], “Ridge regression and James–Stein estimators: Review and comments,” Technometrics, 21, 451–466.

Durbin, J. [1970], “Testing for serial correlation in least squares regression when some of the regressors are lagged dependent variables,” Econometrica, 38, 410–421.

Durbin, J. and G. S. Watson [1950], “Testing for serial correlation in least squares regression I,” Biometrika, 37, 409–438.

Durbin, J. and G. S. Watson [1951], “Testing for serial correlation in least squares regression II,” Biometrika, 38, 159–178.

Durbin, J. and G. S. Watson [1971], “Testing for serial correlation in least squares regression III,” Biometrika, 58, 1–19.

Dutter, R. [1977], “Numerical solution of robust regression problems: Computational aspects, a comparison,” J. Stat. Comput. Simul., 5, 207–238.

Dutter, R. and P. J. Huber [1981], “Numerical methods for the robust nonlinear regression problem,” J. Stat. Comput. Simul., 13, 79–114.

Dykstra, O., Jr. [1971], “The augmentation of experimental data to maximize |X′X|,” Technometrics, 13, 682–688.

Edwards, J. B. [1969], “The relation between the F-test and R2,” Am. Stat., 23, 28.

Efron, B. [1979], “Bootstrap methods: Another look at the jackknife,” Ann. Stat., 7, 1–26.

Efron, B. [1982], The Jackknife, the Bootstrap and Other Resampling Plans, Society for Industrial and Applied Mathematics, Philadelphia.

Efron, B. [1987], “Better bootstrap confidence intervals (with discussion),” J. Am. Stat. Assoc., 82, 172–200.

Efron, B. and R. Tibshirani [1986], “Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy,” Stat. Sci., 1, 54–77.

Efron, B. and R. Tibshirani [1993], An Introduction to the Bootstrap, Chapman & Hall, London.

Efroymson, M. A. [1960], “Multiple regression analysis,” in A. Ralston and H. S. Wilf (Eds.), Mathematical Methods for Digital Computers, Wiley, New York.

Ellerton, R. R. W. [1978], “Is the regression equation adequate—A generalization,” Technometrics, 20, 313–316.

Eubank, R. L. [1988], Spline Smoothing and Nonparametric Regression, Dekker, New York.

Eubank, R. L. and P. Speckman [1990], “Curve fitting by polynomial–trigonometric regression,” Biometrika, 77, 1–9.

Everitt, B. S. [1993], Cluster Analysis, 3rd ed., Halsted Press, New York.

Farrar, D. E. and R. R. Glauber [1967], “Multicollinearity in regression analysis: The problem revisited,” Rev. Econ. Stat., 49, 92–107.

Feder, P. I. [1974], “Graphical techniques in statistical data analysis—Tools for extracting information from data,” Technometrics, 16, 287–299.

Forsythe, A. B. [1972], “Robust estimation of straight-line regression coefficients by minimizing pth power deviations,” Technometrics, 14, 159–166.

Forsythe, G. E. [1957], “Generation and use of orthogonal polynomials for data-fitting with a digital computer,” J. Soc. Ind. Appl. Math., 5, 74–87.

Fuller, W. A. [1976], Introduction to Statistical Time Series, Wiley, New York.

Furnival, G. M. [1971], “All possible regressions with less computation,” Technometrics, 13, 403–408.

Furnival, G. M. and R. W. M. Wilson, Jr. [1974], “Regression by leaps and bounds,” Technometrics, 16, 499–511.

Gallant, A. R. and W. A. Fuller [1973], “Fitting segmented polynomial regression models whose join points have to be estimated,” J. Am. Stat. Assoc., 68, 144–147.

Garside, M. J. [1965], “The best subset in multiple regression analysis,” Appl. Stat., 14, 196–200.

Gartside, P. S. [1972], “A study of methods for comparing several variances,” J. Am. Stat. Assoc., 67, 342–346.

Gaylor, D. W. and J. A. Merrill [1968], “Augmenting existing data in multiple regression,” Technometrics, 10, 73–81.

Geisser, S. [1975], “The predictive sample reuse method with applications,” J. Am. Stat. Assoc., 70, 320–328.

Gentle, J. E., W. J. Kennedy, and V. A. Sposito [1977], “On least absolute deviations estimators,” Commun. Stat., A6, 839–845.

Gibbons, D. G. [1979], A Simulation Study of Some Ridge Estimators, General Motors Research Laboratories, Mathematics Department, GMR-2659 (rev. ed.), Warren, Mich.

Gnanadesikan, R. [1977], Methods for Statistical Analysis of Multivariate Data, Wiley, New York.

Goldberger, A. S. [1964], Econometric Theory, Wiley, New York.

Goldstein, M. and A. F. M. Smith [1974], “Ridge-type estimators for regression analysis,” J. R. Stat. Soc. Ser. B, 36, 284–291.

Golub, G. H. [1969], “Matrix decompositions and statistical calculations,” in R. C. Milton and J. A. Nelder (Eds.), Statistical Computation, Academic, New York.

Gorman, J. W. and R. J. Toman [1966], “Selection of variables for fitting equations to data,” Technometrics, 8, 27–51.

Graybill, F. A. [1961], An Introduction to Linear Statistical Models, Vol. 1, McGraw-Hill, New York.

Graybill, F. A. [1976], Theory and Application of the Linear Model, Duxbury, North Scituate, Mass.

Guilkey, D. K. and J. L. Murphy [1975], “Directed ridge regression techniques in cases of multicollinearity,” J. Am. Stat. Assoc., 70, 769–775.

Gupta, A. and A. K. Das [2000], “Improving resistivity of UF resin through setting of process parameters,” Quality Engineering, 12, 611–618.

Gunst, R. F. [1979], “Similarities among least squares, principal component, and latent root regression estimators,” presented at the Washington, D.C., Joint Statistical Meetings.

Gunst, R. F. and R. L. Mason [1977], “Biased estimation in regression: An evaluation using mean squared error,” J. Am. Stat. Assoc., 72, 616–628.

Gunst, R. F. and R. L. Mason [1979], “Some considerations in the evaluation of alternative prediction equations,” Technometrics, 21, 55–63.

Gunst, R. F., J. T. Webster, and R. L. Mason [1976], “A comparison of least squares and latent root regression estimators,” Technometrics, 18, 75–83.

Gunter, B. [1997a], “Tree-based classification and regression. Part I: Background and fundamentals,” Qual. Prog., 30, August, 159–163.

Gunter, B. [1997b], “Tree-based classification and regression. Part II: Assessing classification performance,” Qual. Prog., 30, December, 83–84.

Gunter, B. [1998], “Tree-based classification and regression. Part III: Tree-based procedures,” Qual. Prog., 31, February, 121.

Hadi, A. S. and J. S. Simonoff [1993], “Procedures for the identification of multiple outliers in linear models,” J. Am. Stat. Assoc., 88, 1264–1272.

Hahn, G. J. [1972], “Simultaneous prediction intervals for a regression model,” Technometrics, 14, 203–214.

Hahn, G. J. [1973], “The coefficient of determination exposed!” Chem. Technol., 3, 609–614.

Hahn, G. J. [1977], “Fitting regression models with no intercept term,” J. Qual. Technol., 9(2), 56–61.

Hahn, G. J. and R. W. Hendrickson [1971], “A table of percentage points of the largest absolute value of k Student t variates and its applications,” Biometrika, 58, 323–332.

Haitovsky, Y. [1969], “A note on the maximization of R2,” Am. Stat., 23(1), 20–21.

Hald, A. [1952], Statistical Theory with Engineering Applications, Wiley, New York.

Halperin, M. [1961], “Fitting of straight lines and prediction when both variables are subject to error,” J. Am. Stat. Assoc., 56, 657–669.

Halperin, M. [1970], “On inverse estimation in linear regression,” Technometrics, 12, 727–736.

Hawkins, D. M. [1973], “On the investigation of alternative regressions by principal components analysis,” Appl. Stat., 22, 275–286.

Hawkins, D. M. [1994], “The feasible solution algorithm for least trimmed squares regression,” Comput. Stat. Data Anal., 17, 185–196.

Hawkins, D. M., D. Bradu, and G. V. Kass [1984], “Location of several outliers in multiple regression using elemental sets,” Technometrics, 26, 197–208.

Hayes, J. G. (Ed.) [1970], Numerical Approximations to Functions and Data, Athlone Press, London.

Hayes, J. G. [1974], “Numerical methods for curve and surface fitting,” J. Inst. Math. Appl., 10, 144–152.

Haykin, S. [1994], Neural Networks: A Comprehensive Foundation, Macmillan Co., New York.

Hemmerle, W. J. and T. F. Brantle [1978], “Explicit and constrained generalized ridge regression,” Technometrics, 20, 109–120.

Hettmansperger, T. P. and J. W. McKean [1998], Robust Nonparametric Statistical Methods, Vol. 5 of Kendall's Library of Statistics, Arnold, London.

Hill, R. C., G. G. Judge, and T. B. Fomby [1978], “On testing the adequacy of a regression model,” Technometrics, 20, 491–494.

Hill, R. W. [1979], “On estimating the covariance matrix of robust regression M-estimates,” Commun. Stat., A8, 1183–1196.

Himmelblau, D. M. [1970], Process Analysis by Statistical Methods, Wiley, New York.

Hoadley, B. [1970], “A Bayesian look at inverse linear regression,” J. Am. Stat. Assoc., 65, 356–369.

Hoaglin, D. C. and R. E. Welsch [1978], “The hat matrix in regression and ANOVA,” Am. Stat., 32(1), 17–22.

Hocking, R. R. [1972], “Criteria for selection of a subset regression: Which one should be used,” Technometrics, 14, 967–970.

Hocking, R. R. [1974], “Misspecification in regression,” Am. Stat., 28, 39–40.

Hocking, R. R. [1976], “The analysis and selection of variables in linear regression,” Biometrics, 32, 1–49.

Hocking, R. R. and L. R. LaMotte [1973], “Using the SELECT program for choosing subset regressions,” in W. O. Thompson and F. B. Cady (Eds.), Proceedings of the University of Kentucky Conference on Regression with a Large Number of Predictor Variables, Department of Statistics, University of Kentucky, Lexington.

Hocking, R. R., F. M. Speed, and M. J. Lynn [1976], “A class of biased estimators in linear regression,” Technometrics, 18, 425–437.

Hodges, S. D. and P. G. Moore [1972], “Data uncertainties and least squares regression,” Appl. Stat., 21, 185–195.

Hoerl, A. E. [1959], “Optimum solution of many variable equations,” Chem. Eng. Prog., 55, 69.

Hoerl, A. E. and R. W. Kennard [1970a], “Ridge regression: Biased estimation for nonorthogonal problems,” Technometrics, 12, 55–67.

Hoerl, A. E. and R. W. Kennard [1970b], “Ridge regression: Applications to nonorthogonal problems,” Technometrics, 12, 69–82.

Hoerl, A. E. and R. W. Kennard [1976], “Ridge regression: Iterative estimation of the biasing parameter,” Commun. Stat., A5, 77–88.

Hoerl, A. E., R. W. Kennard, and K. F. Baldwin [1975], “Ridge regression: Some simulations,” Commun. Stat., 4, 105–123.

Hogg, R. V. [1974], “Adaptive robust procedures: A partial review and some suggestions for future applications and theory,” J. Am. Stat. Assoc., 69, 909–925.

Hogg, R. V. [1979a], “Statistical robustness: One view of its use in applications today,” Am. Stat., 33(3), 108–115.

Hogg, R. V. [1979b], “An introduction to robust estimation,” in R. L. Launer and G. N. Wilkinson (Eds.), Robustness in Statistics, Academic, New York, pp. 1–18.

Hogg, R. V. and R. H. Randles [1975], “Adaptive distribution-free regression methods and their applications,” Technometrics, 17, 399–407.

Holland, P. W. and R. E. Welsch [1977], “Robust regression using iteratively reweighted least squares,” Commun. Stat., A6, 813–828.

Huber, P. J. [1964], “Robust estimation of a location parameter,” Ann. Math. Stat., 35, 73–101.

Huber, P. J. [1972], “Robust statistics: A review,” Ann. Math. Stat., 43, 1041–1067.

Huber, P. J. [1973], “Robust regression: Asymptotics, conjectures, and Monte Carlo,” Ann. Stat., 1, 799–821.

Huber, P. J. [1981], Robust Statistics, Wiley, New York.

Jaeckel, L. A. [1972], “Estimating regression coefficients by minimizing the dispersion of the residuals,” Ann. Math. Stat., 43, 1449–1458.

Joglekar, G., J. H. Schuenemeyer, and V. LaRiccia [1989], “Lack-of-fit testing when replicates are not available,” Am. Stat., 43, 135–143.

Johnson, R. A. and D. W. Wichern [1992], Applied Multivariate Statistical Analysis, Prentice-Hall, Englewood Cliffs, N.J.

Johnston, J. [1972], Econometric Methods, McGraw-Hill, New York.

Jurečková, J. [1977], “Asymptotic relations of M-estimates and R-estimates in linear regression models,” Ann. Stat., 5, 464–472.

Kalotay, A. J. [1971], “Structural solution to the linear calibration problem,” Technometrics, 13, 761–769.

Kendall, M. G. and G. U. Yule [1950], An Introduction to the Theory of Statistics, Charles Griffin, London.

Kennard, R. W. and L. Stone [1969], “Computer aided design of experiments,” Technometrics, 11, 137–148.

Kennedy, W. J. and T. A. Bancroft [1971], “Model-building for prediction in regression using repeated significance tests,” Ann. Math. Stat., 42, 1273–1284.

Khuri, A. H. and J. A. Cornell [1996], Response Surfaces: Designs and Analyses, 2nd ed., Dekker, New York.

Kiefer, J. [1959], “Optimum experimental designs,” J. R. Stat. Soc. Ser. B, 21, 272–304.

Kiefer, J. [1961], “Optimum designs in regression problems. II,” Ann. Math. Stat., 32, 298–325.

Kiefer, J. and J. Wolfowitz [1959], “Optimum designs in regression problems,” Ann. Math. Stat., 30, 271–294.

Krasker, W. S. and R. E. Welsch [1982], “Efficient bounded-influence regression estimation,” J. Am. Stat. Assoc., 77, 595–604.

Krutchkoff, R. G. [1967], “Classical and inverse regression methods of calibration,” Technometrics, 9, 425–439.

Krutchkoff, R. G. [1969], “Classical and inverse regression methods of calibration in extrapolation,” Technometrics, 11, 605–608.

Kunugi, T., T. Tamura, and T. Naito [1961], “New acetylene process uses hydrogen dilution,” Chem. Eng. Prog., 57, 43–49.

Land, C. E. [1974], “Confidence interval estimation for means after data transformation to normality,” J. Am. Stat. Assoc., 69, 795–802 (Correction, ibid., 71, 255).

Larsen, W. A. and S. J. McCleary [1972], “The use of partial residual plots in regression analysis,” Technometrics, 14, 781–790.

Lawless, J. F. [1978], “Ridge and related estimation procedures: Theory and practice,” Commun. Stat., A7, 139–164.

Lawless, J. F. and P. Wang [1976], “A simulation of ridge and other regression estimators,” Commun. Stat., A5, 307–323.

Lawrence, K. D. and J. L. Arthur [1990], “Robust nonlinear regression,” in K. D. Lawrence and J. L. Arthur (Eds.), Robust Regression: Analysis and Applications, Dekker, New York, pp. 59–86.

Lawson, C. L. and R. J. Hanson [1974], Solving Least Squares Problems, Prentice-Hall, Englewood Cliffs, N.J.

Leamer, E. E. [1973], “Multicollinearity: A Bayesian interpretation,” Rev. Econ. Stat., 55, 371–380.

Leamer, E. E. [1978], Specification Searches: Ad Hoc Inference with Nonexperimental Data, Wiley, New York.

Levene, H. [1960], “Robust tests for equality of variances,” in I. Olkin (Ed.), Contributions to Probability and Statistics, Stanford University Press, Palo Alto, Calif., pp. 278–292.

Lieberman, G. J., R. G. Miller, Jr., and M. A. Hamilton [1967], “Unlimited simultaneous discrimination intervals in regression,” Biometrika, 54, 133–145.

Lindley, D. V. [1947], “Regression lines and the linear functional relationship,” J. R. Stat. Soc. Suppl., 9, 218–244.

Lindley, D. V. and A. F. M. Smith [1972], “Bayes estimates for the linear model (with discussion),” J. R. Stat. Soc. Ser. B, 34, 1–41.

Looney, S. W. and T. R. Gulledge, Jr. [1985], “Use of the correlation coefficient with normal probability plots,” Am. Stat., 39, 75–79.

Lowerre, J. M. [1974], “On the mean square error of parameter estimates for some biased estimators,” Technometrics, 16, 461–464.

McCarthy, P. J. [1976], “The use of balanced half-sample replication in cross-validation studies,” J. Am. Stat. Assoc., 71, 596–604.

McCullagh, P. and J. A. Nelder [1989], Generalized Linear Models, 2nd ed., Chapman & Hall, London.

McDonald, G. C. and J. A. Ayers [1978], “Some applications of ‘Chernoff faces’: A technique for graphically representing multivariate data,” in Graphical Representation of Multivariate Data, Academic Press, New York.

McDonald, G. C. and D. I. Galarneau [1975], “A Monte Carlo evaluation of some ridge-type estimators,” J. Am. Stat. Assoc., 70, 407–416.

Mallows, C. L. [1964], “Choosing variables in a linear regression: A graphical aid,” presented at the Central Regional Meeting of the Institute of Mathematical Statistics, Manhattan, Kans.

Mallows, C. L. [1966], “Choosing a subset regression,” presented at the Joint Statistical Meetings, Los Angeles.

Mallows, C. L. [1973], “Some comments on Cp,” Technometrics, 15, 661–675.

Mallows, C. L. [1986], “Augmented partial residuals,” Technometrics, 28, 313–319.

Mallows, C. L. [1995], “More comments on Cp,” Technometrics, 37, 362–372. (Also see [1997], 39, 115–116.)

Madansky, A. [1959], “The fitting of straight lines when both variables are subject to error,” J. Am. Stat. Assoc., 54, 173–205.

Mansfield, E. R. and M. D. Conerly [1987], “Diagnostic value of residual and partial residual plots,” Am. Stat., 41, 107–116.

Mansfield, E. R., J. T. Webster, and R. F. Gunst [1977], “An analytic variable selection procedure for principal component regression,” Appl. Stat., 26, 34–40.

Mantel, N. [1970], “Why stepdown procedures in variable selection,” Technometrics, 12, 621–625.

Marazzi, A. [1993], Algorithms, Routines and S Functions for Robust Statistics, Wadsworth and Brooks/Cole, Pacific Grove, Calif.

Maronna, R. A. [1976], “Robust M-estimators of multivariate location and scatter,” Ann. Stat., 4, 51–67.

Marquardt, D. W. [1963], “An algorithm for least squares estimation of nonlinear parameters,” J. Soc. Ind. Appl. Math., 11, 431–441.

Marquardt, D. W. [1970], “Generalized inverses, ridge regression, biased linear estimation, and nonlinear estimation,” Technometrics, 12, 591–612.

Marquardt, D. W. and R. D. Snee [1975], “Ridge regression in practice,” Am. Stat., 29(1), 3–20.

Mason, R. L., R. F. Gunst, and J. T. Webster [1975], “Regression analysis and problems of multicollinearity,” Commun. Stat., 4(3), 277–292.

Mayer, L. S. and T. A. Willke [1973], “On biased estimation in linear models,” Technometrics, 15, 497–508.

Meyer, R. K. and C. J. Nachtsheim [1995], “The coordinate exchange algorithm for constructing exact optimal designs,” Technometrics, 37, 60–69.

Miller, D. M. [1984], “Reducing transformation bias in curve fitting,” Am. Stat., 38, 124–126.

Miller, R. G., Jr. [1966], Simultaneous Statistical Inference, McGraw-Hill, New York.

Montgomery, D. C. [2009], Design and Analysis of Experiments, 7th ed., Wiley, New York.

Montgomery, D. C., L. A. Johnson, and J. S. Gardiner [1990], Forecasting and Time Series Analysis, 2nd ed., McGraw-Hill, New York.

Montgomery, D. C., C. L. Jennings, and M. Kulahci [2008], Introduction to Time Series Analysis and Forecasting, Wiley, Hoboken, N.J.

Montgomery, D. C., E. W. Martin, and E. A. Peck [1980], “Interior analysis of the observations in multiple linear regression,” J. Qual. Technol., 12(3), 165–173.

Morgan, J. A. and J. F. Tatar [1972], “Calculation of the residual sum of squares for all possible regressions,” Technometrics, 14, 317–325.

Mosteller, F. and J. W. Tukey [1968], “Data analysis including statistics,” in G. Lindzey and E. Aronson (Eds.), Handbook of Social Psychology, Vol. 2, Addison-Wesley, Reading, Mass.

Mosteller, F. and J. W. Tukey [1977], Data Analysis and Regression: A Second Course in Statistics, Addison-Wesley, Reading, Mass.

Moussa-Hamouda, E. and F. C. Leone [1974], “The O-BLUE estimators for complete and censored samples in linear regression,” Technometrics, 16, 441–446.

Moussa-Hamouda, E. and F. C. Leone [1977a], “The robustness of efficiency of adjusted trimmed estimators in linear regression,” Technometrics, 19, 19–34.

Moussa-Hamouda, E. and F. C. Leone [1977b], “Efficiency of ordinary least squares from trimmed and Winsorized samples in linear regression,” Technometrics, 19, 265–273.

Mullet, G. M. [1976], “Why regression coefficients have the wrong sign,” J. Qual. Technol., 8, 121–126.

Myers, R. H. [1990], Classical and Modern Regression with Applications, 2nd ed., PWS-Kent Publishers, Boston.

Myers, R. H. and D. C. Montgomery [1997], “A tutorial on generalized linear models,” J. Qual. Technol., 29, 274–291.

Myers, R. H., D. C. Montgomery, and C. M. Anderson-Cook [2009], Response Surface Methodology: Process and Product Optimization Using Designed Experiments, 3rd ed., Wiley, New York.

Myers, R. H., D. C. Montgomery, G. G. Vining, and T. J. Robinson [2010], Generalized Linear Models with Applications in Engineering and the Sciences, Wiley, Hoboken, NJ.

Narula, S. and J. S. Ramberg [1972], Letter to the Editor, Am. Stat., 26, 42.

Naszódi, L. J. [1978], “Elimination of the bias in the course of calibration,” Technometrics, 20, 201–205.

Nelder, J. A. and R. W. M. Wedderburn [1972], “Generalized linear models,” J. R. Stat. Soc. Ser. A, 135, 370–384.

Neter, J., M. H. Kutner, C. J. Nachtsheim, and W. Wasserman [1996], Applied Linear Statistical Models, 4th ed., Richard D. Irwin, Homewood, Ill.

Neyman, J. and E. L. Scott [1960], “Correction for bias introduced by a transformation of variables,” Ann. Math. Stat., 31, 643–655.

Obenchain, R. L. [1975], “Ridge analysis following a preliminary test of the shrunken hypothesis,” Technometrics, 17, 431–441.

Obenchain, R. L. [1977], “Classical F-tests and confidence intervals for ridge regression,” Technometrics, 19, 429–439.

Ott, R. L. and R. H. Myers [1968], “Optimal experimental designs for estimating the independent variable in regression,” Technometrics, 10, 811–823.

Parker, P. A., G. G. Vining, S. A. Wilson, J. L. Szarka, III, and N. G. Johnson [2010], “Prediction properties of classical and inverse regression for the simple linear calibration problem,” J. Qual. Technol., 42, 332–347.

Pearson, E. S. and H. O. Hartley [1966], Biometrika Tables for Statisticians, Vol. 1, 3rd ed., Cambridge University Press, London.

Peixoto, J. L. [1987], “Hierarchical variable selection in polynomial regression models,” Am. Stat., 41, 311–313.

Peixoto, J. L. [1990], “A property of well-formulated polynomial regression models,” Am. Stat., 44, 26–30. (Also see [1991], 45, 82.)

Peña, D. and V. J. Yohai [1995], “The detection of influential subsets in linear regression by using an influence matrix,” J. R. Stat. Soc. Ser. B, 57, 145–156.

Perng, S. K. and Y. L. Tong [1974], “A sequential solution to the inverse linear regression problem,” Ann. Stat., 2, 535–539.

Pesaran, M. H. and L. J. Slater [1980], Dynamic Regression: Theory and Algorithms, Halsted Press, New York.

Pfaffenberger, R. C. and T. E. Dielman [1985], “A comparison of robust ridge estimators,” in Business and Economics Section Proceedings of the American Statistical Association, pp. 631–635.

Poirier, D. J. [1973], “Piecewise regression using cubic splines,” J. Am. Stat. Assoc., 68, 515–524.

Poirier, D. J. [1975], “On the use of bilinear splines in economics,” J. Econ., 3, 23–24.

Pope, P. T. and J. T. Webster [1972], “The use of an F-statistic in stepwise regression procedures,” Technometrics, 14, 327–340.

Pukelsheim, F. [1995], Optimum Design of Experiments, Chapman & Hall, London.

Ramsay, J. O. [1977], “A comparative study of several robust estimates of slope, intercept, and scale in linear regression,” J. Am. Stat. Assoc., 72, 608–615.

Rao, P. [1971], “Some notes on misspecification in regression,” Am. Stat., 25, 37–39.

Ripley, B. D. [1994], “Statistical ideas for selecting network architectures,” in B. Kappen and S. Gielen (Eds.), Neural Networks: Artificial Intelligence and Industrial Applications, Springer-Verlag, Berlin, pp. 183–190.

Rocke, D. M. and D. L. Woodruff [1996], “Identification of outliers in multivariate data,” J. Am. Stat. Assoc., 91, 1047–1061.

Rosenberg, S. H. and P. S. Levy [1972], “A characterization on misspecification in the general linear regression model,” Biometrics, 28, 1129–1132.

Rossman, A. J. [1994], “Televisions, physicians and life expectancy,” J. Stat. Educ., 2.

Rousseeuw, P. J. [1984], “Least median of squares regression,” J. Am. Stat. Assoc., 79, 871–880.

Rousseeuw, P. J. [1998], “Robust estimation and identifying outliers,” in H. M. Wadsworth (Ed.), Handbook of Statistical Methods for Engineers and Scientists, McGraw–Hill, New York, Chapter 17.

Rousseeuw, P. J. and A. M. Leroy [1987], Robust Regression and Outlier Detection, Wiley, New York.

Rousseeuw, P. J. and B. L. van Zomeren [1990], “Unmasking multivariate outliers and leverage points,” J. Am. Stat. Assoc., 85, 633–651.

Rousseeuw, P. J., and V. Yohai [1984], “Robust regression by means of S-estimators,” in J. Franke, W. Härdle, and R. D. Martin (Eds.), Robust Nonlinear Time Series Analysis: Lecture Notes in Statistics, Vol. 26, Springer, Berlin, pp. 256–272.

Ryan, T. P. [1997], Modern Regression Methods, Wiley, New York.

SAS Institute [1987], SAS Views: SAS Principles of Regression Analysis, SAS Institute, Cary, N.C.

Sawa, T. [1978], “Information criteria for discriminating among alternative regression models,” Econometrica, 46, 1273–1282.

Schatzoff, M., R. Tsao, and S. Fienberg [1968], “Efficient calculation of all possible regressions,” Technometrics, 10, 769–779.

Scheffé, H. [1953], “A method for judging all contrasts in the analysis of variance,” Biometrika, 40, 87–104.

Scheffé, H. [1959], The Analysis of Variance, Wiley, New York.

Scheffé, H. [1973], “A statistical theory of calibration,” Ann. Stat., 1, 1–37.

Schilling, E. G. [1974a], “The relationship of analysis of variance to regression. Part I. Balanced designs,” J. Qual. Technol., 6, 74–83.

Schilling, E. G. [1974b], “The relationship of analysis of variance to regression. Part II. Unbalanced designs,” J. Qual. Technol., 6, 146–153.

Sclove, S. L. [1968], “Improved estimators for coefficients in linear regression,” J. Am. Stat. Assoc., 63, 596–606.

Searle, S. R. [1971], Linear Models, Wiley, New York.

Searle, S. R. and J. G. Udell [1970], “The use of regression on dummy variables in market research,” Manage. Sci. B, 16, 397–409.

Seber, G. A. F. [1977], Linear Regression Analysis, Wiley, New York.

Sebert, D. M., D. C. Montgomery, and D. A. Rollier [1998], “A clustering algorithm for identifying multiple outliers in linear regression,” Comput. Stat. Data Anal., 27, 461–484.

Sielken, R. L., Jr. and H. O. Hartley [1973], “Two linear programming algorithms for unbiased estimation of linear models,” J. Am. Stat. Assoc., 68, 639–641.

Silvey, S. D. [1969], “Multicollinearity and imprecise estimation,” J. R. Stat. Soc. Ser. B, 31, 539–552.

Simpson, D. G., D. Ruppert, and R. J. Carroll [1992], “On one-step GM-estimates and stability of inference in linear regression,” J. Am. Stat. Assoc., 87, 439–450.

Simpson, J. R. and D. C. Montgomery [1996], “A biased-robust regression technique for the combined-outlier multicollinearity problem,” J. Stat. Comput. Simul., 56, 1–22.

Simpson, J. R. and D. C. Montgomery [1998a], “A robust regression technique using compound estimation,” Nav. Res. Logistics, 45, 125–139.

Simpson, J. R. and D. C. Montgomery [1998b], “The development and evaluation of alternative generalized M-estimation techniques,” Commun. Stat. Simul. Comput., 27, 999–1018.

Simpson, J. R. and D. C. Montgomery [1998c], “A performance-based assessment of robust regression methods,” Commun. Stat. Simul. Comput., 27, 1031–1099.

Smith, A. F. M. and M. Goldstein [1975], “Ridge regression: Some comments on a paper of Conniffe and Stone,” The Statistician, 24, 61–66.

Smith, B. T., J. M. Boyle, B. S. Garbow, Y. Ikebe, V. C. Klema, and C. B. Moler [1974], Matrix Eigensystem Routines, Springer-Verlag, Berlin.

Smith, G. and F. Campbell [1980], “A critique of some ridge regression methods (with discussion),” J. Am. Stat. Assoc., 75, 74–103.

Smith, J. H. [1972], “Families of transformations for use in regression analysis,” Am. Stat., 26(3), 59–61.

Smith, P. L. [1979], “Splines as a useful and convenient statistical tool,” Am. Stat., 33(2), 57–62.

Smith, R. C. et al. [1992], “Ozone depletion: Ultraviolet radiation and phytoplankton biology in Antarctic waters,” Science, 255, 952–957.

Snee, R. D. [1973], “Some aspects of nonorthogonal data analysis, Part I. Developing prediction equations,” J. Qual. Technol., 5, 67–79.

Snee, R. D. [1977], “Validation of regression models: Methods and examples,” Technometrics, 19, 415–428.

Sprent, P. [1969], Models in Regression and Related Topics, Methuen, London.

Sprent, P. and G. R. Dolby [1980], “The geometric mean functional relationship,” Biometrics, 36, 547–550.

Staudte, R. G. and S. J. Sheather [1990], Robust Estimation and Testing, Wiley, New York.

Stefanski, W. [1991], “A note on high-breakdown estimators,” Stat. Probab. Lett., 11, 353–358.

Stefansky, W. [1971], “Rejecting outliers by maximum normed residual,” Ann. Math. Stat., 42, 35–45.

Stefansky, W. [1972], “Rejecting outliers in factorial designs,” Technometrics, 14, 469–479.

Stein, C. [1960], “Multiple regression,” in I. Olkin (Ed.), Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling, Stanford Press, Stanford, Calif.

Stewart, G. W. [1973], Introduction to Matrix Computations, Academic, New York.

Stone, M. [1974], “Cross-validating choice and assessment of statistical predictions (with discussion),” J. R. Stat. Soc. Ser. B, 36, 111–147.

Stromberg, A. J. [1993], “Computation of high breakdown nonlinear regression parameters,” J. Am. Stat. Assoc., 88, 237–244.

Stromberg, A. J. and D. Ruppert [1989], “Breakdown in nonlinear regression,” J. Am. Stat. Assoc., 83, 991–997.

Suich, R. and G. C. Derringer [1977], “Is the regression equation adequate—one criterion,” Technometrics, 19, 213–216.

Schwarz, G. [1978], “Estimating the dimension of a model,” Ann. Stat., 6, 461–464.

Theobald, C. M. [1974], “Generalizations of mean square error applied to ridge regression,” J. R. Stat. Soc. Ser. B, 36, 103–106.

Thompson, M. L. [1978a], “Selection of variables in multiple regression: Part I. A review and evaluation,” Int. Stat. Rev., 46, 1–19.

Thompson, M. L. [1978b], “Selection of variables in multiple regression: Part II. Chosen procedures, computations and examples,” Int. Stat. Rev., 46, 129–146.

Tucker, W. T. [1980], “The linear calibration problem revisited,” presented at the ASQC Fall Technical Conference, Cincinnati, Ohio.

Tufte, E. R. [1974], Data Analysis for Politics and Policy, Prentice-Hall, Englewood Cliffs, N.J.

Tukey, J. W. [1957], “On the comparative anatomy of transformations,” Ann. Math. Stat., 28, 602–632.

Wagner, H. M. [1959], “Linear programming techniques for regression analysis,” J. Am. Stat. Assoc., 54, 206–212.

Wahba, G., G. H. Golub, and C. G. Heath [1979], “Generalized cross-validation as a method for choosing a good ridge parameter,” Technometrics, 21, 215–223.

Walker, E. [1984], Influence, Collinearity, and Robust Estimation in Regression, Ph.D. dissertation, Department of Statistics, Virginia Tech, Blacksburg.

Walker, E. and J. B. Birch [1988], “Influence measures in ridge regression,” Technometrics, 30, 221–227.

Walls, R. E. and D. L. Weeks [1969], “A note on the variance of a predicted response in regression,” Am. Stat., 23, 24–26.

Webster, J. T., R. F. Gunst, and R. L. Mason [1974], “Latent root regression analysis,” Technometrics, 16, 513–522.

Weisberg, S. [1985], Applied Linear Regression, 2nd ed., Wiley, New York.

Welsch, R. E. [1975], “Confidence regions for robust regression,” in Statistical Computing Section Proceedings of the American Statistical Association, Washington, D.C.

Welsch, R. E. and S. C. Peters [1978], “Finding influential subsets of data in regression models,” in A. R. Gallant and T. M. Gerig (Eds.), Proceedings of the Eleventh Interface Symposium on Computer Science and Statistics, Institute of Statistics, North Carolina State University, pp. 240–244.

White, J. W. and R. F. Gunst [1979], “Latent root regression: Large sample analysis,” Technometrics, 21, 481–488.

Wichern, D. W., and G. A. Churchill [1978], “A comparison of ridge estimators,” Technometrics, 20, 301–311.

Wilcox, R. R. [1997], Introduction to Robust Estimation and Hypothesis Testing, Academic, New York.

Wilde, D. J. and C. S. Beightler [1967], Foundations of Optimization, Prentice-Hall, Englewood Cliffs, N.J.

Wilkinson, J. W. [1965], The Algebraic Eigenvalue Problem, Oxford University Press, London.

Willan, A. R. and D. G. Watts [1978], “Meaningful multicollinearity measures,” Technometrics, 20, 407–412.

Williams, D. A. [1973], Letter to the Editor, Appl. Stat., 22, 407–408.

Williams, E. J. [1969], “A note on regression methods in calibration,” Technometrics, 11, 189–192.

Wold, S. [1974], “Spline functions in data analysis,” Technometrics, 16, 1–11.

Wonnacott, R. J. and T. H. Wonnacott [1970], Econometrics, Wiley, New York.

Wood, F. S. [1973], “The use of individual effects and residuals in fitting equations to data,” Technometrics, 15, 677–695.

Working, H. and H. Hotelling [1929], “Application of the theory of error to the interpretation of trends,” J. Am. Stat. Assoc. Suppl. (Proc.), 24, 73–85.

Wu, C. F. J. [1986], “Jackknife, bootstrap, and other resampling methods in regression analysis (with discussion),” Ann. Stat., 14, 1261–1350.

Yale, C. and A. B. Forsythe [1976], “Winsorized regression,” Technometrics, 18, 291–300.

Yohai, V. J. [1987], “High breakdown-point and high efficiency robust estimates for regression,” Ann. Stat., 15, 642–656.

Younger, M. S. [1979], A Handbook for Linear Regression, Duxbury Press, North Scituate, Mass.

Zellner, A. [1971], An Introduction to Bayesian Inference in Econometrics, Wiley, New York.
