Appendix A
Maximum Likelihood Estimation
(Source: Franses and Paap [1])
In maximum likelihood estimation, the likelihood function is defined as

\[
L(\theta) = f(y \mid \theta), \tag{A.1}
\]

where $f(y \mid \theta)$ is the joint density of the observed variables $y$ given $\theta$, and $\theta$ summarizes the model parameters belonging to the parameter space $\Theta$. The log-likelihood function is

\[
\ell(\theta) = \log L(\theta). \tag{A.2}
\]
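As a concrete illustration of (A.2), the following minimal Python sketch evaluates the log-likelihood of an i.i.d. normal sample under $\theta = (\mu, \sigma^2)$; the model and the sample values are hypothetical and chosen only for illustration. Because the joint density factorizes over independent observations, the log-likelihood is the sum of the individual log densities.

```python
import math

def log_likelihood(y, mu, sigma2):
    # Log-likelihood of an i.i.d. normal sample:
    # l(mu, sigma2) = -n/2 * log(2*pi*sigma2) - sum((y_t - mu)^2) / (2*sigma2)
    n = len(y)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((yt - mu) ** 2 for yt in y) / (2 * sigma2))

# Hypothetical sample; the log-likelihood can now be compared across
# candidate parameter values.
y = [1.2, 0.8, 1.5, 0.9]
print(log_likelihood(y, mu=1.1, sigma2=0.25))
```

For a fixed $\sigma^2$, the function is maximized in $\mu$ at the sample mean, which is the analytical ML estimator in this simple case.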
Since it is generally not possible to find an analytical solution for the value of $\theta$ that maximizes the log-likelihood function, the maximization has to be done using a numerical optimization algorithm. Here, Franses and Paap [1] prefer the Newton–Raphson method, which introduces the gradient and the Hessian matrix as

\[
G(\theta) = \frac{\partial \ell(\theta)}{\partial \theta}, \tag{A.3}
\]

\[
H(\theta) = \frac{\partial^2 \ell(\theta)}{\partial \theta \, \partial \theta'}. \tag{A.4}
\]
Around a given value $\theta_h$ the first-order condition for the optimization problem can be linearized, resulting in $G(\theta) \approx G(\theta_h) + H(\theta_h)(\theta - \theta_h) = 0$. Solving this for $\theta$ gives the sequence of estimates

\[
\theta_{h+1} = \theta_h - H(\theta_h)^{-1} G(\theta_h), \tag{A.5}
\]

where $G(\theta_h)$ and $H(\theta_h)$ are the gradient and Hessian matrix evaluated at $\theta_h$.
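The Newton–Raphson iteration above can be sketched in Python for a simple scalar-parameter model. As an illustrative (not Franses and Paap's own) example, consider ML estimation of a Poisson rate $\lambda$, where the gradient and Hessian of the log-likelihood have closed forms, so the update (A.5) reduces to scalar arithmetic.

```python
import numpy as np

# Poisson log-likelihood (up to the constant -sum(log y_t!)) for rate lam:
# l(lam) = sum(y) * log(lam) - n * lam
def gradient(lam, y):
    # G(lam) = d l / d lam
    return y.sum() / lam - len(y)

def hessian(lam, y):
    # H(lam) = d^2 l / d lam^2
    return -y.sum() / lam ** 2

def newton_raphson(y, lam0=1.0, tol=1e-10, max_iter=100):
    # Iterate lam_{h+1} = lam_h - H(lam_h)^{-1} G(lam_h) until the step
    # is negligible, i.e. until the first-order condition G(lam) = 0 holds.
    lam = lam0
    for _ in range(max_iter):
        step = gradient(lam, y) / hessian(lam, y)
        lam = lam - step
        if abs(step) < tol:
            break
    return lam

# Hypothetical count data; for the Poisson model the MLE has the closed
# form lam_hat = mean(y), so the iteration should converge to the sample mean.
y = np.array([2, 3, 1, 4, 2, 5, 3])
print(newton_raphson(y), y.mean())
```

In the multi-parameter case, `gradient` returns a vector, `hessian` a matrix, and the division becomes a linear solve, `np.linalg.solve(H, G)`; the structure of the iteration is otherwise unchanged.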