Appendix A

Maximum Likelihood Estimation

(Source: Franses and Paap [1])

In maximum likelihood estimation, the likelihood function is defined as

(A.1)   $L(\theta) = f(y \mid \theta)$

where $f(y \mid \theta)$ is the joint density of the observed variables $y$ given $\theta$, and $\theta$ summarizes the model parameters $\theta_1$ to $\theta_K$. The log-likelihood function is

(A.2)   $\ell(\theta) = \log L(\theta)$
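
As an illustration (this example is not part of the source text), consider a Poisson regression model in which a count $y_t$ has conditional mean $\exp(x_t'\theta)$ for a vector of explanatory variables $x_t$, $t = 1, \dots, T$. The log-likelihood function (A.2) then reads

$\ell(\theta) = \sum_{t=1}^{T} \left[ y_t x_t'\theta - \exp(x_t'\theta) - \log(y_t!) \right]$

which has no analytical maximizer in $\theta$.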

Since it is generally not possible to find an analytical solution for the value of $\theta$ that maximizes the log-likelihood function, the maximization has to be carried out with a numerical optimization algorithm. Here, Franses and Paap [1] prefer the Newton–Raphson method, which introduces the gradient $G(\theta)$ and the Hessian matrix $H(\theta)$, defined as

(A.3)   $G(\theta) = \dfrac{\partial \ell(\theta)}{\partial \theta}$

(A.4)   $H(\theta) = \dfrac{\partial^2 \ell(\theta)}{\partial \theta\, \partial \theta'}$
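
For the illustrative Poisson regression model above (again, not from the source), these expressions take the simple forms

$G(\theta) = \sum_{t=1}^{T} \left( y_t - \exp(x_t'\theta) \right) x_t$ and $H(\theta) = -\sum_{t=1}^{T} \exp(x_t'\theta)\, x_t x_t'$

so that $H(\theta)$ is negative (semi)definite and the log-likelihood is concave in $\theta$.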

Around a given value $\theta_h$, the first-order condition for the optimization problem can be linearized, resulting in $G(\theta_h) + H(\theta_h)(\theta - \theta_h) = 0$. Solving this for $\theta$ gives the sequence of estimates

(A.5)   $\theta_{h+1} = \theta_h - H(\theta_h)^{-1} G(\theta_h)$

where $G(\theta_h)$ and $H(\theta_h)$ are the gradient and the Hessian matrix evaluated at $\theta_h$.
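
The iteration in (A.5) is straightforward to implement. The sketch below is not taken from Franses and Paap [1]; it applies the update to the illustrative Poisson regression model used above, and the function name, starting value, tolerance, and simulated data are assumptions made for the example.

```python
import numpy as np

def newton_raphson_poisson_mle(y, X, max_iter=50, tol=1e-8):
    """Maximize a Poisson-regression log-likelihood with the Newton-Raphson
    iteration of (A.5): theta_{h+1} = theta_h - H(theta_h)^{-1} G(theta_h).
    Illustrative sketch only; not the source's implementation."""
    theta = np.zeros(X.shape[1])            # starting value theta_0 (assumed)
    for _ in range(max_iter):
        mu = np.exp(X @ theta)              # conditional mean exp(x_t' theta)
        grad = X.T @ (y - mu)               # gradient G(theta_h), cf. (A.3)
        hess = -(X.T * mu) @ X              # Hessian H(theta_h), cf. (A.4)
        step = np.linalg.solve(hess, grad)  # H(theta_h)^{-1} G(theta_h)
        theta = theta - step                # Newton-Raphson update, cf. (A.5)
        if np.max(np.abs(step)) < tol:      # stop once the update is negligible
            break
    return theta

# Illustrative usage with simulated data (parameter values are arbitrary):
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(np.exp(X @ np.array([0.5, -0.3])))
print(newton_raphson_poisson_mle(y, X))     # should be close to (0.5, -0.3)
```

Because the Hessian of this particular model is negative (semi)definite everywhere, the iteration typically converges in a few steps; for models whose log-likelihood is not concave, a line search or a different starting value may be needed.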
