Numerical optimization

This section briefly introduces the different optimization algorithms that can be applied to minimize the loss function, with or without a penalty term. These algorithms are described in more detail in the Summary of optimization techniques section in Appendix A, Basic Concepts.

First, let's define the least squares problem. Minimizing the loss function consists of setting its first-order derivatives to zero, which generates a system of D equations (also known as the gradient equations), D being the number of regression weights (parameters). The weights are then computed iteratively by solving this system of equations with a numerical optimization algorithm.

Note

M10: The definition of the least squares-based loss function for residuals $r_i$, weights $w$, a model $f$, input data $x_i$, and expected values $y_i$ is as follows:

$$ L(w) = \frac{1}{2}\sum_{i=1}^{n} r_i^2 = \frac{1}{2}\sum_{i=1}^{n} \big(y_i - f(x_i, w)\big)^2 $$

M10: The generation of the gradient equations with a Jacobian matrix $J$ (refer to the Mathematics section in Appendix A, Basic Concepts) after minimization of the loss function $L$ is defined as follows:

$$ \frac{\partial L}{\partial w_j} = \sum_{i=1}^{n} r_i \frac{\partial r_i}{\partial w_j} = -\sum_{i=1}^{n} r_i J_{ij} = 0 \quad \text{with} \quad J_{ij} = \frac{\partial f(x_i, w)}{\partial w_j} $$

M11: The iterative approximation of the model $f$ with a first-order Taylor series expansion, used to compute the weights $w$ at iteration $k+1$, is defined as follows:

$$ f(x_i, w^{(k+1)}) \approx f(x_i, w^{(k)}) + \sum_{j=1}^{D} J_{ij}\,\big(w_j^{(k+1)} - w_j^{(k)}\big) $$
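To make the scheme in M10 and M11 concrete, here is a minimal Scala sketch of a Gauss-Newton-style update: the model is linearized around the current weights (M11), and the linear least squares problem derived from the gradient equations (M10) is solved for the weight update. The exponential model f(x, w) = w0·exp(w1·x), the helper names, and the fixed iteration count are illustrative assumptions, not code from this book; the linear algebra relies on the Apache Commons Math library mentioned below.

```scala
import org.apache.commons.math3.linear.{Array2DRowRealMatrix, ArrayRealVector, QRDecomposition}

object GaussNewtonSketch {
  // Hypothetical model f(x, w) = w0 * exp(w1 * x) and its partial derivatives
  // with respect to w0 and w1 (one row of the Jacobian J per observation).
  def f(x: Double, w: Array[Double]): Double = w(0) * math.exp(w(1) * x)

  def jacobianRow(x: Double, w: Array[Double]): Array[Double] =
    Array(math.exp(w(1) * x), w(0) * x * math.exp(w(1) * x))

  // One Gauss-Newton step: linearize f around the current weights (M11), then
  // solve the linear least squares problem derived from the gradient equations (M10).
  def step(xs: Array[Double], ys: Array[Double], w: Array[Double]): Array[Double] = {
    val residuals = new ArrayRealVector(xs.zip(ys).map { case (x, y) => y - f(x, w) })
    val jacobian  = new Array2DRowRealMatrix(xs.map(x => jacobianRow(x, w)))
    // The QR decomposition solves J * delta = r in the least squares sense,
    // which is equivalent to the normal equations (J'J) delta = J'r
    val delta = new QRDecomposition(jacobian).getSolver.solve(residuals)
    w.indices.map(j => w(j) + delta.getEntry(j)).toArray
  }

  // Fixed number of iterations for simplicity; a real implementation would
  // stop when the residuals or the weight updates fall below a tolerance.
  def fit(xs: Array[Double], ys: Array[Double], w0: Array[Double], numIters: Int = 20): Array[Double] =
    (0 until numIters).foldLeft(w0)((w, _) => step(xs, ys, w))
}
```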

Logistic regression is a nonlinear model; therefore, fitting it requires minimizing a nonlinear sum of squared residuals. The optimization algorithms for nonlinear least squares problems fall into two categories:

  • Newton (or 2nd-order techniques): These algorithms calculate the second-order derivatives (the Hessian matrix) to compute the regression weights that nullify the gradient. The two most common algorithms in this category are the Gauss-Newton and Levenberg-Marquardt methods (refer to the Nonlinear least squares minimization section in Appendix A, Basic Concepts). Both algorithms are included in the Apache Commons Math library (see the sketch after this list).
  • Quasi-Newton (or 1st-order techniques): First-order algorithms do not compute the second-order derivatives directly; they estimate them from the Jacobian matrix of the least squares residuals. These methods can minimize any real-valued function, not just the least squares summation. This category of algorithms includes the Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS) methods (refer to the Quasi-Newton algorithms section in Appendix A, Basic Concepts).
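As a complement, the following sketch shows one possible way to invoke the Levenberg-Marquardt implementation bundled with Apache Commons Math, through the library's fitting.leastsquares API (version 3.x). The exponential model, the sample observations, the starting weights, and the iteration limits are all illustrative assumptions; this is a sketch under those assumptions, not the library's only entry point.

```scala
import org.apache.commons.math3.fitting.leastsquares.{
  LeastSquaresBuilder, LevenbergMarquardtOptimizer, MultivariateJacobianFunction
}
import org.apache.commons.math3.linear.{Array2DRowRealMatrix, ArrayRealVector, RealMatrix, RealVector}
import org.apache.commons.math3.util.Pair

object LevenbergMarquardtSketch {
  // Sample observations (x, y) and the model f(x, w) = w0 * exp(w1 * x), chosen for illustration.
  val xs = Array(0.0, 1.0, 2.0, 3.0, 4.0)
  val ys = Array(1.0, 1.6, 2.7, 4.4, 7.4)

  // The model supplies both the predicted values and the Jacobian at the current weights.
  val model = new MultivariateJacobianFunction {
    override def value(w: RealVector): Pair[RealVector, RealMatrix] = {
      val (w0, w1) = (w.getEntry(0), w.getEntry(1))
      val values   = new ArrayRealVector(xs.map(x => w0 * math.exp(w1 * x)))
      val jacobian = new Array2DRowRealMatrix(
        xs.map(x => Array(math.exp(w1 * x), w0 * x * math.exp(w1 * x))))
      new Pair[RealVector, RealMatrix](values, jacobian)
    }
  }

  def main(args: Array[String]): Unit = {
    val problem = new LeastSquaresBuilder()
      .start(Array(1.0, 0.1))     // initial weights w(0), an arbitrary guess
      .model(model)
      .target(ys)                 // expected values y
      .maxEvaluations(1000)
      .maxIterations(100)
      .build()

    val optimum = new LevenbergMarquardtOptimizer().optimize(problem)
    println(s"weights: ${optimum.getPoint}, RMS: ${optimum.getRMS}")
  }
}
```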