SciPy implementations

Before writing your own root-finding algorithm to solve nonlinear or even linear problems, take a look at the documentation of the scipy.optimize methods. SciPy is an extension of Python that contains a large collection of scientific computing functions. Chances are that one of these open source algorithms fits into your application off the shelf.

Root-finding scalar functions

Some of the root-finding functions that can be found in the scipy.optimize module are bisect, newton, brentq, and ridder. Let's set up the examples that we have discussed using the implementations from SciPy:

"""
Documentation at
http://docs.scipy.org/doc/scipy/reference/optimize.html
"""
import scipy.optimize as optimize

y = lambda x: x**3 + 2.*x**2 - 5.
dy = lambda x: 3.*x**2 + 4.*x

# Call method: bisect(f, a, b[, args, xtol, rtol, maxiter, ...])
print "Bisection method: %s" \
      % optimize.bisect(y, -5., 5., xtol=0.00001)

# Call method: newton(func, x0[, fprime, args, tol, ...])
print "Newton's method: %s" \
      % optimize.newton(y, 5., fprime=dy)
# When fprime=None, the secant method is used.
print "Secant method: %s" \
      % optimize.newton(y, 5.)

# Call method: brentq(f, a, b[, args, xtol, rtol, maxiter, ...])
print "Brent's method: %s" \
      % optimize.brentq(y, -5., 5.)

When we run the preceding code, the following output is generated:

Bisection method: 1.24190330505 
Newton's method: 1.24189656303 
Secant method: 1.24189656303 
Brent's method: 1.24189656303

We can see that the SciPy implementations give us answers very close to the ones we derived earlier.
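The ridder routine mentioned earlier follows the same bracketing signature as bisect and brentq. A minimal sketch of the call, reusing y from above, would be:

# Call method: ridder(f, a, b[, args, xtol, rtol, maxiter, ...])
print "Ridder's method: %s" \
      % optimize.ridder(y, -5., 5.)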

It should be noted that SciPy has a set of well-defined conditions for every implementation. For example, the function call of the bisection routine is given as follows:

scipy.optimize.bisect(f, a, b, args=(), xtol=1e-12, rtol=4.4408920985006262e-16, maxiter=100, full_output=False, disp=True)

The function will strictly evaluate the function f to return a zero of the function; f(a) and f(b) cannot have the same sign. In certain scenarios, it is difficult to fulfill these constraints. For example, in solving for nonlinear implied volatility models, volatility values cannot be negative. In active markets, finding a root or a zero of the volatility function is almost impossible without modifying the underlying implementation. In such cases, implementing our own root-finding methods might give us more authority over how our application should behave.
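To see the bracketing constraint in action, consider the following sketch, which reuses y from the earlier example; the interval endpoints here are chosen purely for illustration. Since y(1) is negative and y(2) is positive, the interval [1, 2] brackets the root, whereas [2, 3] does not and causes bisect to raise a ValueError:

print "Bracketed: %s" % optimize.bisect(y, 1., 2.)
try:
    optimize.bisect(y, 2., 3.)  # y(2.) and y(3.) are both positive
except ValueError as e:
    print "No bracket: %s" % e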

General nonlinear solvers

The scipy.optimize module also contains multidimensional general solvers that we can harness to our advantage. The root and fsolve functions are two examples, with the following signatures:

  • root(fun, x0[, args, method, jac, tol, ...]): This finds a root of a vector function
  • fsolve(func, x0[, args, fprime, ...]): This finds the roots of a function

The fsolve function returns the roots as an array, while root returns its result as a dictionary-like object whose fields describe the solution and the solver's progress. Using our example again as inputs to these functions, we will get the following output:

>>> import scipy.optimize as optimize

>>> y = lambda x: x**3 + 2.*x**2 - 5.
>>> dy = lambda x: 3.*x**2 + 4.*x

>>> print optimize.fsolve(y, 5., fprime=dy)
[ 1.24189656]
>>> print optimize.root(y, 5.)
 status: 1
 success: True
     qtf: array([ -3.73605502e-09])
    nfev: 12
       r: array([-9.59451815])
     fun: array([  3.55271368e-15])
       x: array([ 1.24189656])
 message: 'The solution converged.'
    fjac: array([[-1.]]) 

Using an initial guess value of 5, our solution converged to the root at 1.24189656, which is pretty close to the answers we had so far. What happens when we choose a value on the other side of the graph? Let's use an initial guess value of -5:

>>> print optimize.fsolve(y, -5., fprime=dy)
[-1.33306553]
>>> print optimize.root(y, -5.)
  status: 5
 success: False
     qtf: array([ 3.81481521])
    nfev: 28
       r: array([-0.00461503])
     fun: array([-3.81481496])
       x: array([-1.33306551])
 message: 'The iteration is not making good progress, as measured by the 
  improvement from the last ten iterations.'
    fjac: array([[-1.]])

As seen from the display output, the algorithm did not converge, and the answer returned is not actually a root: the function value there, -3.81481496, is nowhere near zero. The returned point, -1.33306551, sits almost exactly at the stationary point x = -4/3, where the derivative dy vanishes, so the Newton-type iteration can make no further progress. If we take a look at the equation on a graph, the curve is nearly flat in this region. A bracketing root finder would be needed to guarantee the desired level of accuracy, while general solvers attempt to settle on the nearest answer in the fastest time.
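One practical safeguard, sketched here under the assumption that a sign-changing bracket such as [-5, 5] is known in advance, is to check the solver's success flag and fall back to a bracketing root finder whenever the general solver fails:

>>> sol = optimize.root(y, -5.)
>>> if not sol.success:
...     # Fall back to a guaranteed bracketing method.
...     print optimize.brentq(y, -5., 5.)
... 
1.24189656303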
