Getting ready

Before we proceed, let's look at the signature of the newton_krylov function, which we will use in this recipe:

scipy.optimize.newton_krylov(F, xin, iter=None, rdiff=None, method='lgmres', inner_maxiter=20, inner_M=None, outer_k=10, verbose=False, maxiter=None, f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, tol_norm=None, line_search='armijo', callback=None, **kw)

In the following table, we can see the parameters of the newton_krylov method and their descriptions:

Parameters

F: function(x) -> f. Function whose root to find; should take and return an array-like object.

xin: array_like. Initial guess for the solution.

rdiff: float, optional. Relative step size to use in numerical differentiation.

method: {'lgmres', 'gmres', 'bicgstab', 'cgs', 'minres'} or function. Krylov method to use to approximate the Jacobian. Can be a string, or a function implementing the same interface as the iterative solvers in scipy.sparse.linalg.

The default is scipy.sparse.linalg.lgmres.

inner_M: LinearOperator or InverseJacobian

Preconditioner for the inner Krylov iteration. Note that you can also use inverse Jacobians as (adaptive) preconditioners. For example:

# Use the inverse of a Broyden approximation to the Jacobian
# as an adaptive preconditioner for the inner Krylov iteration
from scipy.optimize.nonlin import BroydenFirst, KrylovJacobian
from scipy.optimize.nonlin import InverseJacobian

jac = BroydenFirst()
kjac = KrylovJacobian(inner_M=InverseJacobian(jac))

If the preconditioner has a method named update, it will be called as update(x, f) after each non-linear step, with x giving the current point, and f the current function value.
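As a minimal sketch, a custom preconditioner exposing such an update method might look as follows; the class name and the diagonal-scaling heuristic are illustrative assumptions, not part of SciPy:

import numpy as np
from scipy.sparse.linalg import LinearOperator

class JacobiPreconditioner(LinearOperator):
    # Hypothetical adaptive diagonal preconditioner; newton_krylov
    # calls update(x, f) after each non-linear step if present
    def __init__(self, n):
        super().__init__(dtype=float, shape=(n, n))
        self.diag = np.ones(n)  # start from the identity

    def _matvec(self, v):
        # Apply the preconditioner: a simple diagonal scaling
        return v / self.diag

    def update(self, x, f):
        # Illustrative heuristic: refresh the diagonal from the
        # current residual, kept safely away from zero
        self.diag = np.abs(f) + 1.0

An instance would then be passed as inner_M=JacobiPreconditioner(n).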

inner_tol, inner_maxiter, ...: Parameters to pass on to the inner Krylov solver. See scipy.sparse.linalg.gmres for details.

outer_k: int, optional. Size of the subspace kept across LGMRES non-linear iterations. See scipy.sparse.linalg.lgmres documentation for details.

iter: int, optional. Number of iterations to make. If omitted (default), make as many as required to meet tolerances.

verbose: bool, optional. Print status to stdout on every iteration.

maxiter: int, optional. Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.

f_tol: float, optional. Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.

f_rtol: float, optional. Relative tolerance for the residual. If omitted, not used.

x_tol: float, optional. Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.

x_rtol: float, optional. Relative minimum step size. If omitted, not used.

tol_norm: function(vector) -> scalar, optional. Norm to use in convergence check. Default is the maximum norm.
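For instance, a Euclidean norm could be supplied in place of the default maximum norm (a hypothetical choice, shown only to illustrate the expected interface):

import numpy as np

def euclidean_norm(v):
    # Hypothetical replacement for the default maximum norm
    return np.linalg.norm(v)

# It would be passed as newton_krylov(F, xin, tol_norm=euclidean_norm).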

line_search: {None, 'armijo' (default), 'wolfe'}, optional. Which type of line search to use to determine the step size in the direction given by the Jacobian approximation.

callback: function, optional. Callback function; it is called on every iteration as callback(x, f), where x is the current solution and f is the corresponding residual.
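As a minimal sketch, such a callback might simply report the residual norm at each iteration (the name report is a hypothetical choice):

import numpy as np

def report(x, f):
    # Called once per non-linear iteration with the current
    # solution x and the corresponding residual f
    print("max |F(x)| =", np.max(np.abs(f)))

# It would be passed as newton_krylov(F, xin, callback=report).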

Returns

sol: ndarray. An array (of similar array type as xin) containing the final solution.

Raises

NoConvergence: When a solution was not found.
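To tie these parameters together, here is a minimal sketch of a call to newton_krylov on a small non-linear system; the residual function F, the initial guess, and the tolerance below are illustrative choices rather than prescribed values:

import numpy as np
from scipy.optimize import newton_krylov

def F(x):
    # Illustrative two-variable system with a root at (1, 2)
    return np.array([x[0] + x[1] - 3.0,
                     x[0] * x[1] - 2.0])

sol = newton_krylov(F, xin=[0.5, 3.0], method='lgmres',
                    inner_maxiter=20, f_tol=6e-6)
print(sol)  # approximately [1. 2.]

On convergence, newton_krylov returns the solution array; if the tolerances cannot be met within maxiter iterations, it raises NoConvergence, as noted above.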
