The problem of finding polynomial zeros, although one of the oldest problems in mathematics, is still important since it appears in a vast variety of branches of mathematics, the technical sciences, physics, computer science, control theory, signal processing, and even economics and the social sciences. More details can be found in McNamee’s book “Numerical Methods for Roots of Polynomials” (McNamee, 2007); the bibliography compiled by McNamee contains about 8000 entries. Although any method for finding a single root of an equation can be applied to a polynomial equation as well, the method of deflation is not a good choice when all zeros are required, since it produces approximations of descending accuracy. It is more fruitful to apply a simultaneous method, which determines all zeros at the same time and with, more or less, the same accuracy. Many methods for the simultaneous determination of polynomial zeros, in ordinary complex arithmetic as well as in interval arithmetic, are given in the Springer book “Iterative Methods for Simultaneous Inclusion of Polynomial Zeros” (Petković, 1989).
This chapter is devoted to simultaneous methods for polynomial zeros. We have restricted our attention to methods with corrections. The order of convergence of the basic methods is limited, that is, their computational efficiency cannot be improved. An important advance is attained by applying multipoint methods for a single zero (of order at most four) to accelerate simultaneous iterative methods, which increases the convergence rate with only a few additional computations. In this way the computational efficiency of simultaneous methods is considerably improved. This approach was first applied by Nourein (1977a) in complex arithmetic, and by Petković and Carstensen (1993a) in circular complex arithmetic (the arithmetic of disks). Proceeding in this manner, we present in this chapter several simultaneous methods for finding simple and multiple complex zeros of polynomials, using optimal two-point methods to accelerate the convergence.
In this section we present two iterative methods of high computational efficiency for finding all (real or complex) zeros of a polynomial simultaneously. The first method is constructed using a multipoint interpolating technique and has local cubic convergence, see Petković, 2009a. The second method combines the Ehrlich-Aberth method of third order and a class of two-point methods of optimal order four for finding a simple root, considered in Section 2.3.
Although we speak of a zero of a function and a root of an equation, in the case of algebraic polynomials the terms zero and root are used interchangeably in this chapter. We preserve the term zero-relation to denote a relation which gives an expression for a zero of a polynomial.
Let be an approximation to a simple zero ζ of a function , which is at least twice differentiable in a neighborhood of ζ. In a similar manner as in Section 2.1, we construct a quadratic interpolating polynomial ϕ for such that
Traub, 1964 showed that the function ϕ of the form
(7.1)
fulfils these conditions, see (2.8).
Let be a point satisfying the condition . Then from (7.1) it follows:
(7.2)
Solving the Equation (7.2) for , we obtain
(7.3)
This is an implicit relation in so that on the right-hand side of (7.3) should be estimated by some approximation of the zero ζ.
We restrict our consideration to algebraic (monic) polynomials of degree with simple zeros having reasonably close approximations . Recall that one of the most frequently used iterative methods for the simultaneous approximation of polynomial zeros is the quadratically convergent Weierstrass’ (or Durand-Dochev-Kerner) method (see Durand, 1960; Dochev, 1962; Kerner, 1966),
(7.4)
where are new approximations and is the index set. In fact, Weierstrass did not use (7.4) for the numerical computation of polynomial zeros, Durand et al., 1960 in an implicit form and Dočev Dochev and Bulgarian), 1962 were the first to apply the iterative method (7.4) in practice.
Define Weierstrass’ function by
where the index always takes all values (except where explicitly stated otherwise) from the set in the products and sums which appear in the sequel. For we will write and call Weierstrass’ correction.
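As a concrete illustration, the Weierstrass iteration (7.4) built from these corrections can be sketched in Python as follows (a minimal sketch; the helper names are ours, not from the text):

```python
def polyval(coeffs, x):
    """Horner evaluation of P at x; coeffs lists the coefficients of the
    monic polynomial P, highest degree first."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def weierstrass_step(coeffs, z):
    """One iteration of the Weierstrass (Durand-Dochev-Kerner) method (7.4):
    each approximation is corrected by W_i = P(z_i) / prod_{j != i} (z_i - z_j)."""
    new = []
    for i, zi in enumerate(z):
        prod = 1
        for j, zj in enumerate(z):
            if j != i:
                prod *= zi - zj
        new.append(zi - polyval(coeffs, zi) / prod)
    return new
```

For P(z) = z³ − 6z² + 11z − 6 = (z − 1)(z − 2)(z − 3), a dozen iterations from crude complex starting points already reproduce the zeros 1, 2, 3 to high accuracy; all zeros are refined at once, with roughly the same accuracy.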
Let us return to (7.3) and put and on the right-hand side of (7.3). In this way the following method has been obtained in Petković, 2009a,
(7.5)
The iterative formula (7.5) produces new (presumably improved) approximations to the zeros of , simultaneously. A similar method was constructed in Petković and Petković, 2007b. Introducing the iteration index , we get the simultaneous method (7.5) in the following form
(7.6)
where
Our objective is the study of properties of the method (7.6), including convergence analysis and computational aspects.
The computation of Weierstrass’ corrections in each iteration also enables us to control the upper error bounds of approximations. Let be approximations calculated by the iterative formula (7.6).
Under suitable computationally verifiable conditions, which can be easily accomplished in practice, each of the disks defined by
contains the corresponding zero . This means that
The value of the multiplier depends on the polynomial degree and the initial conditions, see Petković (2008, Section 3.4) and Petković et al. (2007). Therefore, the radius can be taken as the upper error bound of . Since is calculated in the iterative formula (7.6), no additional computations are needed, except in the last iteration. In a similar way as in Petković et al., 2007, it can be shown that the sequences of radii converge cubically to 0.
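Such error disks built from Weierstrass’ corrections can be sketched as follows; the default multiplier c = n used here is an illustrative classical choice, not the multiplier of the text (which depends on the polynomial degree and the initial conditions):

```python
def error_disks(coeffs, z, c=None):
    """Return (center, radius) pairs of a-posteriori disks built from
    Weierstrass' corrections: radius_i = c * |W_i|.  By default c = n,
    an assumption made for illustration only."""
    n = len(z)
    if c is None:
        c = n
    disks = []
    for i, zi in enumerate(z):
        p, prod = 0, 1          # Horner value of P(z_i) and the node product
        for coef in coeffs:
            p = p * zi + coef
        for j, zj in enumerate(z):
            if j != i:
                prod *= zi - zj
        disks.append((zi, c * abs(p / prod)))
    return disks
```

For approximations close to the zeros, each radius is a computable error indicator that shrinks together with |W_i|, so no extra work beyond the quantities already present in (7.6) is required.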
Now we are going to analyze the convergence rate of the method (7.6). For simplicity, we will omit the iteration index in our analysis and consider the iterative formula (7.5).
Theorem 7.1
If are sufficiently close approximations to the zeros of a polynomial , then the order of convergence of the iterative method (7.5) is three.
Proof
In the convergence analysis we will use the notation for two complex numbers and whose moduli are of the same order, that is, .
Let and . According to the assumption of the theorem, the errors are sufficiently small in moduli. Let us assume that the errors are of the same order in moduli and let , where is the error such that . Similarly, . Observe that
(7.7)
In our proof we use the representation of the polynomial by Lagrange’s interpolation at the nodes ,
(7.8)
Applying the logarithmic derivative to (7.8) we obtain
Hence, putting in the above formula,
and
(7.9)
Using Taylor’s series, the estimation (7.9), and the expansion into the geometric series, from (7.5) we find
(7.10)
where
defines the Chebyshev iterative method of third order (1.21), that is,
(7.11)
According to (7.10) and (7.11), we obtain
(7.12)
since is the dominant term. From (7.12) we conclude that the order of convergence of the simultaneous method (7.5) is 3, which completes the proof of Theorem 7.1.
Example 7.1
The following example demonstrates the very good convergence behavior of the proposed method (7.6) in a global sense, in the presence of very crude initial approximations. The initial approximations are chosen using Aberth’s approach (Aberth, 1973),
(7.13)
where is the coefficient of the polynomial
and is the radius of a circle containing all zeros of . Initial approximations generated by (7.13) are equidistantly spaced on a circle with radius , see Figure 7.1 for a polynomial of the 15th degree. In practice, is often calculated by
providing that the inclusion disk , centered at the origin, contains all zeros of the polynomial , see Henrici (1974, p. 457).
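A sketch of this initialization: n points spaced equidistantly on a circle centered at the zero centroid. The angular offset 3/(2n) and the containment radius r0 (here Cauchy’s classical bound 1 + max|aₖ|) are our illustrative choices and need not coincide with the formulas of the text:

```python
import cmath

def aberth_initial(coeffs):
    """Initial approximations in the spirit of (7.13): n points equidistantly
    spaced on a circle of radius r0 centered at the centroid -a_{n-1}/n of the
    zeros.  Assumptions: r0 is Cauchy's bound 1 + max|a_k| (all zeros of a
    monic polynomial lie inside that disk), and the angular offset 3/(2n)
    is chosen only to break symmetry with the real axis."""
    n = len(coeffs) - 1
    center = -coeffs[1] / n
    r0 = 1 + max(abs(a) for a in coeffs[1:])
    return [center + r0 * cmath.exp(1j * (2 * cmath.pi * k / n + 1.5 / n))
            for k in range(n)]
```

These crude, widely spread starting points are exactly the kind of input with which the trajectories of Figure 7.1 were generated.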
Figure 7.1 Example 7.1 – trajectories of approximations
The method (7.6) has been applied for finding all zeros of the polynomial
of Wilkinson’s type (Wilkinson, 1963) with real zeros and . It is well known that this class of polynomials is ill-conditioned: small perturbations of the polynomial coefficients cause considerable variations of the zeros. Notice that many iterative methods encounter serious difficulties in finding the zeros of Wilkinson-like polynomials.
We have started with Aberth’s initial approximations (7.13) taking and . The iterative process has been terminated upon satisfying the stopping criterion
The behavior of the simultaneous method (7.6) is shown graphically in Fig. 7.1. At the beginning and in the middle phase of the iterative procedure the method (7.6) converges only linearly, but the radially distributed approximations move almost straight toward the zeros. The cubic convergence becomes apparent in the last several iterations.
The computational efficiency of simultaneous methods can be considerably increased by the use of suitable corrections. We now present a family of iterative methods with corrections for the simultaneous determination of simple polynomial zeros. Methods with corrections rank among the most efficient existing methods in the class of simultaneous methods based on fixed point relations. The presented iterative formula relies on a fixed point relation of Gargantini-Henrici’s type (Gargantini and Henrici, 1972). A very high computational efficiency is attained using suitable corrections which arise from a class of two-point methods of order four (see Chapter 2) with minimal computational costs.
Let be a monic polynomial of degree with simple real or complex zeros and let
(7.14)
be Newton’s correction appearing in the quadratically convergent Newton’s method. To construct an iterative method for the simultaneous inclusion of polynomial zeros, Gargantini and Henrici (1972) started from (7.14) and derived the following zero-relation
(7.15)
In the literature, relations of this type are usually called fixed-point relations in a wider sense, so we will use this term too. Given distinct approximations to the zeros , we set and replace the zeros by some approximations on the right-hand side of (7.15). In this manner, the following general iterative method
(7.16)
for the simultaneous determination of all simple zeros of the polynomial is obtained.
The substitution in (7.16) gives the well-known Ehrlich-Aberth method
(7.17)
studied by Ehrlich (1967) and Aberth (1973). The latter paper includes the proof of cubic convergence of the simultaneous method (7.17).
Comparing (7.15) and (7.16), it is evident that better approximations produce more accurate approximations ; indeed, if , then . This idea was employed by Nourein (1977b) for the construction of a fourth-order method; he substituted Newton’s approximations in (7.16) to obtain the accelerated method, often called Nourein’s method,
(7.18)
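The two iterations can be sketched in one routine, since Nourein’s method (7.18) differs from Ehrlich-Aberth’s method (7.17) only in shifting the summation nodes by Newton’s corrections (a sketch; the helper names are ours):

```python
def horner(coeffs, x):
    """Evaluate a polynomial, coefficients listed highest degree first."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def ehrlich_aberth_step(coeffs, z, nourein=False):
    """One step of Ehrlich-Aberth (7.17):
        z_i <- z_i - 1 / ( 1/N(z_i) - sum_{j != i} 1/(z_i - z_j) ),
    with Newton's correction N(z) = P(z)/P'(z) as in (7.14).  With
    nourein=True, each node z_j in the sum is replaced by z_j - N(z_j),
    which gives Nourein's method (7.18)."""
    n = len(coeffs) - 1
    dcoeffs = [c * (n - k) for k, c in enumerate(coeffs[:-1])]   # P'
    newton = [horner(coeffs, zi) / horner(dcoeffs, zi) for zi in z]
    nodes = [zi - ni for zi, ni in zip(z, newton)] if nourein else z
    new = []
    for i, zi in enumerate(z):
        s = sum(1 / (zi - nodes[j]) for j in range(len(z)) if j != i)
        new.append(zi - 1 / (1 / newton[i] - s))
    return new
```

The corrections cost almost nothing beyond (7.17), since the Newton corrections are computed anyway, while the order rises from three to four.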
We extend this approach to state a family of sixth-order methods.
Let be a real or complex function continuous together with its derivatives and in a neighborhood of 0, and let
Assume that the approximations in (7.16) are determined as follows:
(7.19)
In fact, (7.19) defines the family of two-point fourth-order methods developed in Petković and Petković, 2010 (see (2.74)).
Introducing the approximations (7.19) into (7.16), we obtain the corresponding simultaneous method of the form
(7.20)
for . As discussed in Section 2.3, the function can take different forms that satisfy (pretty relaxed) conditions
Therefore, the iterative formula (7.20) defines a family of simultaneous methods.
If are initial approximations to the polynomial zeros , then the family of iterative simultaneous methods is defined in the following way:
(7.21)
where and ψ is defined in (7.19).
Remark 7.1
To decrease the total computational cost, before executing each iteration step it is necessary first to calculate all the approximations
In this way we avoid the repetition of computations under the sum in (7.21).
Now we state the convergence theorem that gives necessary and sufficient conditions imposed on the function to provide as high as possible an order of convergence of the simultaneous method (7.21).
Theorem 7.2
Let be any sufficiently differentiable function satisfying and . If are initial approximations close enough to the distinct zeros , then the order of convergence of the family of simultaneous methods (7.21) is 6.
Proof
For simplicity, we omit the iteration index and denote all quantities in the next iteration with the symbol . Let be as in the previous chapters, but now with an additional subscript index. For brevity, let
Then, starting from (7.20) and using (7.14) we obtain
(7.22)
and hence
(7.23)
The following error relation for the family (7.19) is given in Theorem 2.6,
(7.24)
under the conditions , and bounded. Following the conditions of Theorem 7.2 we assume that for any pair and let be the error of maximal modulus. Then, according to (7.24) and the expression for , we have and from (7.23) we find
which means that the order of convergence of the family of simultaneous methods (7.20) is 6.
The family of simultaneous methods (7.21) includes some special cases which can be obtained by taking different particular forms of the function , assuming that the argument of may be real or complex. Some of the special choices of are listed in Section 2.3, where they produce Methods 2.1–2.6, so we omit them here. Taking these special choices of , we get from (7.21) particular simultaneous methods.
From the practical point of view, it is of great importance to estimate the relevant features of root-finding methods such as:
• number of necessary numerical operations in computing zeros with the required accuracy;
• total running time of a complete iterative process on a computer.
These properties are closely connected to the computational efficiency, whose knowledge is of particular interest in designing packages of root-solvers. More details on this topic may be found in Milovanović and Petković (1986) and Petković (1989a, Chapter 6).
We are going to compare the convergence behavior and computational efficiency of Ehrlich-Aberth’s method (7.17), Nourein’s method (7.18), and the new family of simultaneous methods (7.20). This comparison procedure is entirely justified since the analysis of efficiency given in Petković (1989a, Chapter 6) for several computing machines showed that Nourein’s method (7.18) has the highest computational efficiency in the class of simultaneous methods based on fixed point relations (that is, zero-relations).
As presented by McNamee (2007, Chapter 1) and Petković (1989a, Chapter 6), the efficiency of an iterative method can be successfully estimated by the efficiency index
(7.25)
where is the -order of convergence of the iterative method and is the computational cost. The ranking of methods obtained by this formula largely matches the real CPU (central processing unit) time, see Milovanović and Petković (1986). The same ranking follows from the formula , often used in the previous chapters.
As noted in Section 1.3, in order to evaluate the computational cost of simultaneous methods it is preferable to weight the number of arithmetic operations per iteration with certain weights, depending on their execution times. Denote these weights with and for addition/subtraction, multiplication and division, respectively. Let and be the number of additions and subtractions, multiplications and divisions per iteration for all zeros of a given polynomial of degree . Then the computational cost can be (approximately) expressed as
(7.26)
and from (7.25) and (7.26) we obtain
(7.27)
In practice, the weights depend on characteristics of the employed processor (hardware) and the algorithm applied in executing basic arithmetic operations (software).
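The comparison based on (7.25)–(7.27) can be sketched as follows. The weights below mirror the multiple-precision discussion (w_AS ∼ d, w_M = w_D ∼ d log d), but the operation counts are hypothetical placeholders, not the entries of Table 7.1:

```python
import math

def efficiency_index(r, ops, weights):
    """Efficiency index E = r**(1/C) of (7.25), with the cost
    C = w_AS*AS + w_M*M + w_D*D from (7.26)-(7.27)."""
    cost = sum(w * k for w, k in zip(weights, ops))
    return r ** (1.0 / cost)

# Illustrative weights for d-bit precision.
d = 64
w = (d, d * math.log2(d), d * math.log2(d))

# Hypothetical O(n^2) operation counts per iteration (placeholders only).
n = 20
ops = (4 * n * n, 3 * n * n, n * n)

E3 = efficiency_index(3, ops, w)   # a third-order method with this cost
E6 = efficiency_index(6, ops, w)   # a sixth-order method with the same cost
ratio_percent = (E6 / E3 - 1) * 100
```

At equal cost a higher order always yields a larger index; in the actual comparison the cost of (7.20) exceeds that of (7.17), and the percentage ratios shown in Fig. 7.2 are computed exactly in the manner of `ratio_percent` above.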
For simplicity, we consider real polynomials with real zeros. The analysis of computational efficiency in the case of complex polynomials with real or complex zeros is essentially similar, although slightly more tedious, since we are usually obliged to reduce the calculations to real arithmetic operations. The numbers of basic operations in real arithmetic are given in Table 7.1 as functions of the polynomial degree .
For the sake of comparison of the simultaneous methods (7.17), (7.18) and (7.20), we have used the weights (appearing in (7.27)) determined according to the estimated complexity of basic operations in multiple-precision arithmetic. Without loss of generality, we assume that floating-point number representation is used, with a binary fraction of bits. In other words, we deal with “precision” numbers, giving results with a relative error of approximately . Following results given in Brent and Zimmermann (2011), the execution times and of addition and subtraction are . Using Schönhage-Strassen multiplication (see Fousse et al., 2007; Schönhage and Strassen, 1971), often implemented in multiple-precision libraries (used, for instance, in the computational software packages Mathematica, Maple, Magma), we have
We have chosen the weights and proportionally to and , respectively, for 64-bit architecture.
Applying (7.27) we calculated the percentage ratios
where EA, N, and F stand for Ehrlich-Aberth’s method (7.17), Nourein’s method (7.18) and the new family (7.20), respectively. These ratios are graphically presented in Fig. 7.2 as functions of the polynomial degree and show the (percentage) improvement of computational efficiency of the new method (7.20) relative to the methods (7.17) and (7.18). In Fig. 7.2 the graph of is drawn by solid line and of by dotted line. Similar curves are obtained with the weights proportional to the execution times of basic operations for octuple precision (machine epsilon ) for Pentium M 2.8 GHz running Fedora core 3 and Opteron 64-bit processor (data taken from Fujimoto et al., 2005).
Figure 7.2 The ratios of efficiency indices
It is obvious from Fig. 7.2 that the new method (7.20) is more efficient than the methods (7.17) and (7.18). The improvement is especially evident in regard to the Ehrlich-Aberth method (7.17) (F/EA% – solid line). Having in mind the aforementioned fact on the dominant efficiency of the Nourein method (7.18), it follows that the proposed family of simultaneous methods (7.20) generates the most efficient methods for the simultaneous determination of polynomial zeros in the class of methods based on fixed point relations.
To demonstrate the convergence behavior of the methods (7.17), (7.18) and (7.20), we have tested a number of polynomial equations. Among many tested algebraic polynomials we have selected two examples, realized in Mathematica with multiple-precision arithmetic.
As a measure of accuracy of the obtained approximations, we have calculated Euclid’s norm
(7.28)
where and . Tables given below also contain the computational order of convergence , evaluated by the following formula (see Weerakoon and Fernando, 2000)
(7.29)
Although this formula is derived for methods which find a single root, it mostly gives acceptable results for simultaneous methods.
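A sketch of (7.29) applied to three successive error norms of the form (7.28):

```python
import math

def coc(e0, e1, e2):
    """Computational order of convergence in the sense of (7.29),
    estimated from three successive error norms e0 > e1 > e2:
        rho ~ log(e2/e1) / log(e1/e0)."""
    return math.log(e2 / e1) / math.log(e1 / e0)
```

For a method of order p with errors behaving like e_{k+1} ≈ C·e_k^p, this quotient tends to p as the iteration proceeds, which is why the last columns of the tables below approach the theoretical orders.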
Example 7.2
We have applied the iterative methods (7.17), (7.18) and (7.20) for the simultaneous approximation of the zeros , of the polynomial
In the case of the family (7.20), we have used six two-step methods (Method 2.1–Method 2.6) given in Section 2.3, characterized by the functions defined later by (7.56), (7.57), (7.58), (7.59), (7.60), (7.61) in Section 7.3. Initial approximations have been taken in such a way as to give . The error norms calculated by (7.28) and the computational order of convergence evaluated by (7.29) are given in Table 7.2, where means , as before.
Example 7.3
The same methods (7.17), (7.18) and (7.20) (with ) have been applied for the simultaneous determination of zeros of the polynomial of the 21st degree:
The selected initial approximations yield . The error norms and the computational order of convergence are given in Table 7.3.
From Tables 7.2 and 7.3 and a number of tested polynomial equations we can conclude that the proposed family (7.20) produces approximations of considerably high accuracy; two iterative steps are usually sufficient for solving most practical problems when initial approximations are reasonably good and polynomials are well-conditioned. We observe that the computational order of convergence evaluated by (7.29) (the last column of Tables 7.2 and 7.3) fits the theoretical order of convergence of all considered methods well.
In the previous section we have shown how multipoint methods can be successfully applied for constructing iterative methods for the simultaneous determination of simple zeros of polynomials. Considering the fixed point relation (7.15), the following question arises: is it possible to construct a sixth-order simultaneous method for finding multiple zeros? The derivation of such a method was impossible until recently, since optimal fourth-order methods for multiple roots requiring only three function evaluations, similar to (7.16), had not yet been developed. However, Li et al. (2009b) proposed a two-point method for a multiple root with optimal order four, which enabled the construction of a simultaneous method of very high computational efficiency for approximating multiple polynomial zeros. Employing this two-point method, in this section we state a method of order six.
Remark 7.2
Li-Liao-Cheng’s method (Li et al., 2009b) was recently generalized by Zhou et al. (2011), see (2.163) in Section 2.7. However, the particular methods from the family (2.163) are all of the same efficiency, so we have taken Li-Liao-Cheng’s method, which is also a member of the family (2.163), for its simple form. The family (2.163) is considered in Section 7.5 for accelerating the Halley-like inclusion method.
Let be a monic polynomial of degree with multiple real or complex zeros of respective multiplicities . Then
(7.30)
We single out the term from (7.30) and derive the following zero-relation
(7.31)
which serves as the basis for the construction of a simultaneous method for the determination of multiple polynomial zeros. This relation was also used in Gargantini (1978) for the construction of iterative methods for the simultaneous inclusion of multiple zeros of polynomials in complex circular arithmetic.
Let be distinct approximations to the zeros . Setting and replacing by some approximations on the right-hand side of (7.31), one obtains the following iterative method
(7.32)
for the simultaneous determination of all multiple zeros of the polynomial . Here denotes a new approximation to the zero . The choice in (7.32) gives the third-order method of Ehrlich-Aberth’s type for multiple zeros
(7.33)
It is interesting to note that the method (7.33) in ordinary complex arithmetic was constructed after a more complicated method of the same type in circular complex arithmetic, see Gargantini (1978).
Furthermore, putting Schröder’s approximations in (7.32), the following accelerated method of fourth order is obtained (see Milovanović and Petković, 1983),
(7.34)
Note that the iterative method (7.34) reduces to Nourein’s method (7.18) in the case of simple zeros, see Nourein (1977b).
As mentioned in the previous section, analyzing (7.31)–(7.34) it is evident that the better approximations give the more accurate approximations . We apply this idea to construct a higher order method.
The fourth-order iterative method (7.34) is obtained using Schröder’s second order method
A further increase of the convergence order can be obtained by using higher-order methods for finding a single multiple root. Here we use the two-point method for finding a single multiple root proposed by Li et al. (2009b),
(7.35)
where
being the multiplicity of the sought zero ζ of a function (not necessarily an algebraic polynomial), see (2.161). The order of convergence of the iterative method (7.35) is 4, that is,
(7.36)
holds (for the proof, see Li et al., (2009b) and Section 2.7).
In the sequel, we substitute by the approximation of and by the corresponding multiplicity of . The approximation appearing in (7.32) is calculated by (7.35), that is,
where we put
and
After these substitutions in (7.32) and introducing the iteration index , we obtain a method for the simultaneous approximation of all simple or multiple zeros of a given polynomial
(7.37)
where and the notation is similar to that of (7.21).
The convergence rate of the new method (7.37) is investigated in the following theorem.
Theorem 7.3
If initial approximations are sufficiently close to distinct zeros of a given polynomial, then the order of convergence of the simultaneous method (7.37) is 6.
Proof
The proof bears a resemblance to the proof of Theorem 7.2. We omit the iteration indices for simplicity. According to the conditions of Theorem 7.3, we can assume that for any pair . Let be the error of maximal modulus with .
For brevity, let
Then, starting from (7.37) and using (7.30), we obtain
(7.38)
and hence
(7.39)
According to (7.36) we have and from (7.39) we find
since the denominator of (7.39) tends to when . Therefore, the order of convergence of the simultaneous method (7.37) is 6.
Now we consider the convergence behavior and computational efficiency of the methods (7.33), (7.34) and the new simultaneous method (7.37). Following the analysis of efficiency given in Petković (1989a, Chapter 6) for several computing machines, we know that the method (7.34) has the highest computational efficiency in the class of simultaneous methods based on fixed point relations. There is no need to compare the new method (7.37) to other sixth-order methods, for two reasons:
1) At present, there are no other simultaneous sixth-order methods of the form (7.32), except the method (7.37) with Li-Liao-Cheng’s two-point method (7.35) or other particular methods of the family (2.163).
2) The existing sixth-order methods for multiple zeros, such as Halley-like methods (Petković, 1989b; Wang and Wu, 1987), are less efficient than the mentioned method (7.34), see Petković (1989a, Chapter 6).
Comparing the iterative formulae (7.34) and (7.37), we notice that the new formula (7.37) requires ν new polynomial evaluations per iteration relative to (7.34). Hence we conclude that the minimal computational efficiency of the iterative method (7.37) appears when , that is, when all zeros are simple. This “worst case” of our computational analysis has already been considered in Section 7.1, with the conclusion that the method (7.37) (which is, in the case of simple zeros, equivalent to the method (7.20)) possesses the highest computational efficiency.
The convergence behavior of the methods (7.33), (7.34) and the new simultaneous method (7.37) is illustrated in the following three examples.
Example 7.4
The methods (7.33), (7.34) and (7.37) have been applied for the simultaneous approximation to the zeros of the polynomial
The zeros of this polynomial are with the respective multiplicities , . We have selected the initial approximations
The entries of the error norms calculated by (7.28) for the first three iterations are given in Table 7.4.
Example 7.5
The same methods (7.33), (7.34) and (7.37) have been applied for the simultaneous approximation to the zeros of the polynomial
The zeros of this polynomial are with the respective multiplicities 2, 3, 2, 2, 2, 2, 3, 2. The following starting approximations have been selected
The entries of the error norms obtained in the first three iterations are given in Table 7.5.
Example 7.6
In order to find the zeros of the polynomial
we have applied the same methods as in Examples 7.4 and 7.5. The zeros of this polynomial are , with multiplicities 2, 3, 2, 2, 3, 2, 2, 2, 2, respectively. The starting approximations are
The error norms obtained in the first three iterations are given in Table 7.6.
From Tables 7.4–7.6 and a number of tested polynomial equations we can conclude that the proposed family (7.37) produces approximations of considerably high accuracy; two iterative steps are usually sufficient for solving most practical problems. The presented analysis of computational efficiency shows that the family (7.37) is more efficient than the existing methods for multiple zeros based on fixed point relations.
The use of finite-precision arithmetic on digital computers does not normally produce any information about the accuracy of the computed approximate results. Interval methods, endowed with the very useful self-validating property, most frequently resolve this problem. In particular, iterative methods for the simultaneous determination of complex zeros of a given polynomial, realized in complex interval arithmetic, are very efficient devices for deriving error estimates for a given set of approximate zeros. These methods belong to the class of self-validated algorithms that produce complex intervals (disks or rectangles) containing the sought zeros. In this manner, information about upper error bounds of the approximations to the zeros is provided (see, e.g., Alefeld and Herzberger (1983), Petković (1989a), Petković and Petković (1998) for more details). In addition, rounding errors can be incorporated without altering the fundamental structure of the applied interval method. Moreover, some practical problems of applied and industrial mathematics, whose mathematical models include algebraic polynomials with uncertain coefficients as a consequence of inaccurate measurements or uncertain values of parameters, can be effectively solved by applying interval methods.
To develop and analyze interval methods for the inclusion of polynomial zeros, we list here some basic properties of circular complex arithmetic. More details about this type of interval arithmetic, the so-called arithmetic of disks, can be found in the books Alefeld and Herzberger (1983), and Petković and Petković (1998).
A closed circular region (disk) with center and radius will be denoted by the parametric notation . The set of all disks is denoted by . The basic circular arithmetic operations are defined as follows:
inversion of a nonzero disk is defined by the Möbius transformation,
(7.40)
Besides the exact inversion of a disk , the so-called centered inversion defined by
(7.41)
is often used. Although the centered inversion gives larger disks than the exact inversion , we will see later that the centering property of the inversion plays a key role in increasing the convergence speed of inclusion methods with corrections.
Using (7.40) and (7.41) division is defined by
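The two inversions (7.40) and (7.41) admit the standard closed forms {c̄/(|c|² − r²); r/(|c|² − r²)} and {1/c; r/(|c|(|c| − r))}, respectively, for a disk {c; r} not containing the origin; a sketch:

```python
def inv_exact(c, r):
    """Exact inversion (7.40) of the disk {c; r}, assuming 0 lies outside:
    center conj(c)/(|c|**2 - r**2), radius r/(|c|**2 - r**2)."""
    d = abs(c) ** 2 - r ** 2
    return c.conjugate() / d, r / d

def inv_centered(c, r):
    """Centered inversion (7.41): center 1/c, radius r/(|c|*(|c| - r))."""
    return 1 / c, r / (abs(c) * (abs(c) - r))
```

The centered disk always has the larger radius, but its midpoint is exactly the inverse of the original midpoint; as discussed below, this centering property is precisely what the inclusion methods with corrections exploit.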
An interval function is called a complex circular extension of a complex function if
If the implication
holds, then is inclusion isotone. In particular, we have
It should be emphasized that all of the introduced interval arithmetic operations possess the property of inclusion isotonicity, that is,
The main advantage of iterative methods for the simultaneous inclusion of polynomial zeros is the property that all produced interval approximations (in the form of real intervals, rectangles or disks) contain the sought zeros, providing in this way the upper error bound of approximations (by semi-width, semi-diagonal or radius, respectively). For this property, interval methods of this type are often called inclusion methods or self-validated methods.
To obtain enclosures of complex zeros, it is preferable to employ circular complex interval arithmetic, as carried out in Sections 7.3–7.5. As in Sections 7.1 and 7.2, we concentrate on interval methods based on Gargantini-Henrici’s fixed-point relation (7.15). We take this relation for two reasons: it is simple, and it enables the construction of iterative methods of very high computational efficiency in ordinary complex arithmetic as well as in circular complex arithmetic.
First we present the Gargantini-Henrici iterative method (Gargantini and Henrici, 1972) for the simultaneous inclusion of simple zeros of polynomials. Let be a polynomial with simple zeros . Assume that disjoint disks such that have been found. Let us put in (7.15). Since , according to the inclusion isotonicity property it follows from (7.15) that
(7.42)
Let be initial disjoint disks containing the zeros , that is, for all . The relation (7.42) suggests the following method for the simultaneous inclusion of all simple complex zeros of the polynomial :
(7.43)
Here is the center of disk at the th iteration and . The interval method (7.43) was proposed by Gargantini and Henrici (1972). It is worth noting that this method was the first one in the literature intended for the simultaneous inclusion of polynomial complex zeros. The order of convergence of the method (7.43) is 3, independently of the applied type of disk inversion. This is not the case with inclusion methods with corrections, which will be discussed later.
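A minimal Python sketch of the total-step iteration (7.43) may look as follows. This is an illustration under the assumption of the centered inversion (7.41), with disks stored as (center, radius) pairs; the demo polynomial z³ − 1 is our own choice, not taken from the text:

```python
import cmath

def inv_c(c, r):
    # centered inversion (7.41) of a disk {c; r} not containing the origin
    a = abs(c)
    assert a > r, "disk must not contain the origin"
    return 1.0 / c, r / (a * (a - r))

def gh_step(p, dp, disks):
    """One total-step Gargantini-Henrici iteration (7.43) in disk arithmetic.
    disks is a list of (center, radius) pairs, one disk per simple zero."""
    out = []
    for i, (zi, ri) in enumerate(disks):
        sc, sr = 0j, 0.0          # sum of the disks (z_i - Z_j)^{-1}, j != i
        for j, (zj, rj) in enumerate(disks):
            if j != i:
                c, r = inv_c(zi - zj, rj)
                sc, sr = sc + c, sr + r
        # a complex number minus a disk shifts the center and keeps the radius
        c, r = inv_c(dp(zi) / p(zi) - sc, sr)
        out.append((zi - c, r))   # Z_i is replaced by z_i - (...)^{-1}
    return out

# demo on P(z) = z^3 - 1 with zeros 1, exp(2*pi*i/3), exp(4*pi*i/3)
zeros = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
disks = [(z + 0.1, 0.3) for z in zeros]   # disjoint disks containing the zeros
for _ in range(2):
    disks = gh_step(lambda z: z**3 - 1, lambda z: 3 * z**2, disks)
```

After two iterations the radii drop from 0.3 to below 10⁻⁴ while each disk still contains its zero, in accordance with the cubic convergence of (7.43).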
The convergence rate of the inclusion iterative method (7.43) can be accelerated using the approach presented in Carstensen and Petković (1994) and Petković and Carstensen (1993). This approach is based on the use of suitable corrections arising from iterative methods for finding a single root of nonlinear equations. The increased order of convergence is obtained with minimal additional computational costs, for example, by employing already calculated quantities and only very little new information, which means that accelerated methods possess a high computational efficiency.
Let be a correction appearing in the iterative formula
which defines a root-finding iterative method. Then, if the implication
(7.44)
holds for all in each iterative step, we can modify the inclusion method (7.43) into a new form
(7.45)
Here denotes one of the disk inversions given by (7.40) and (7.41).
Various corrections lead to different inclusion methods. For example, taking in (7.45), we obtain the following inclusion method of fourth order with Newton’s corrections for the inclusion of simple zeros proposed in Carstensen and Petković (1994)
(7.46)
assuming that the centered inversion (7.41) is applied.
Remark 7.3
An important discussion about the choice of disk inversion is necessary here. At first glance, the use of the exact inversion (7.40) in the iterative formulae (7.45) and (7.46) (and in other interval formulae with corrections given later) seems reasonable, since the exact inversion (7.40) gives smaller disks than the centered inversion (7.41). Surprisingly enough, the centered inversion applied to inclusion methods with corrections gives better results. Namely, the application of the centered inversion produces sequences of midpoints of the resulting disks which coincide with very fast iterative methods in ordinary complex arithmetic. These fast methods significantly force the contraction of the inclusion disks, since the convergence of midpoints and the convergence of radii are coupled. This obviously leads to the accelerated convergence of interval methods. On the other hand, the exact inversion gives “shifted” midpoints of the inverted disks and slower convergence of the sequences of shifted midpoints in ordinary complex arithmetic, so that the contraction of the disks is not so good. For more details see Petković (2011a). Consequently, the use of the exact inversion can accelerate the convergence only to a certain extent when corrections are used in (7.45). As an illustration, note that the application of the exact inversion in (7.46) gives the -order , see Carstensen and Petković (1994). For this reason, henceforth we will work only with the centered inversion (7.41) without emphasizing this explicitly.
In what follows we present the inclusion method of Gargantini-Henrici’s type with corrections proposed in Petković et al. (2011d). In this section we restrict our consideration to polynomials with simple zeros and start from a wide class of two-point methods (7.19), already used in Section 7.1,
(7.47)
where . Recall that is an at least twice differentiable function that satisfies the conditions , and .
Let us go back to the general method (7.45). Let be the correction defined by (7.47) in the th iterative step. If
for all in each iterative step , then following (7.45) we construct a new inclusion method with corrections
(7.48)
where , and . The centered inversion is assumed in (7.48) and in all subsequent iterative formulae. As mentioned in Remark 7.1, in order to decrease the total computational cost, before executing any iteration step it is necessary to calculate all corrections in advance.
Now we present the convergence analysis of the interval method (7.48). Introduce the abbreviations
where is given by (7.47). Here we assume that for any pair .
Lemma 7.1
The following relations are valid for the inclusion method (7.48):
Proof
Let . Then . By the relation
and the operations of circular interval arithmetic (with the centered inversion (7.41)), we find
The order of the family of two-point methods (7.19) is four (see Theorem 2.5 in Section 2.3), that is, . Besides, and , so that we have
(7.49)
and
(7.50)
Using the introduced abbreviations, the disks produced by the inclusion method (7.48) can be expressed in the form
According to this and using (7.49) and (7.50) we find that
and
In the convergence analysis of inclusion methods (7.48) with corrections and other inclusion methods of the same type, we will use the notions “sufficiently small disks” and “well separated disks.” This means that there exists a constant , depending only on the polynomial degree , such that the inequality
(7.51)
holds. This inequality defines sufficient conditions for the convergence and the validity of implication (7.44). The quantity η can be regarded as a measure of the separation of the disks, while defines the size of the disks. The inequality of the form (7.51) for initial disks has been stated for several inclusion methods given by Petković (1989a) and Petković and Petković (1998) to define computationally verifiable initial conditions that guarantee the convergence. Since such an analysis is not of primary importance in this book, we will not deal with inequalities of this type, but we will use the above notions to emphasize the fulfilment of sufficient initial convergence conditions.
Now we state the following theorem for the inclusion methods (7.48).
Theorem 7.4
Let be well separated and sufficiently small initial disks and . Then the lower bound of the order of convergence of the family of inclusion methods (7.48) is 6.
Proof
For simplicity, as usual in this type of analysis, we adopt the relation , which means that we deal with the “worst case” model. This assumption does not influence the final result of the limit process that we apply in order to obtain the lower bound of the -order of convergence. Furthermore, the assumption of the theorem that the initial disks are sufficiently small implies that their centers are close enough to the sought zeros; hence, the two-point method (7.47) has order 4.
By virtue of Lemma 7.1 we note that the sequences of centers and radii behave as follows
where the notation means . From these relations and Theorem 1.5, we form the -matrix
which has the spectral radius and the corresponding positive eigenvector . Hence, according to Theorem 1.5, we obtain
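The spectral-radius computation invoked through Theorem 1.5 is elementary in the 2×2 case. The sketch below is illustrative Python; the sample matrix is a hypothetical choice with spectral radius 6 and positive eigenvector (1, 1), not the matrix from the proof (whose entries come from the relations of Lemma 7.1):

```python
import math

def spectral_radius_2x2(a, b, c, d):
    # eigenvalues of [[a, b], [c, d]] from the characteristic polynomial;
    # for nonnegative b, c the discriminant (a-d)^2 + 4bc is nonnegative,
    # so both eigenvalues are real
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))
```

For example, the hypothetical matrix [[4, 2], [4, 2]] has eigenvalues 6 and 0, so the lower bound on the R-order read off by Theorem 1.5 would be 6.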
The convergence of the methods (7.43), (7.46) and (7.48) can be accelerated by applying the Gauss-Seidel approach which makes use of circular approximations calculated in the same iterative step. In this manner, starting from the general method (7.45) and the corrections and given by (7.47), we obtain the following methods in a serial mode
(7.52)
(7.53)
(7.54)
where and . The inclusion iterative methods (7.43), (7.45), (7.46) and (7.48) are realized in a parallel mode and are often called total-step methods. The methods (7.52)–(7.54), based on the Gauss-Seidel approach, are called single-step methods. The single-step method (7.52) was considered in Petković (1989), the method (7.53) in Carstensen and Petković (1994), and the method (7.54) in Petković et al. (2011).
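The serial (Gauss-Seidel) mode differs from the total-step realization only in that a refined disk overwrites the old one within the current sweep. A Python sketch in the spirit of (7.52) follows (an illustration, not from the book; centered inversion assumed, disks as (center, radius) pairs, demo polynomial our own choice):

```python
import cmath

def inv_c(c, r):
    # centered inversion (7.41) of a disk {c; r} not containing the origin
    a = abs(c)
    assert a > r, "disk must not contain the origin"
    return 1.0 / c, r / (a * (a - r))

def gh_single_step(p, dp, disks):
    """One serial (Gauss-Seidel) sweep of the Gargantini-Henrici iteration:
    a disk refined earlier in the sweep is used immediately afterwards."""
    disks = list(disks)
    for i, (zi, ri) in enumerate(disks):
        sc, sr = 0j, 0.0
        for j, (zj, rj) in enumerate(disks):   # entries with j < i are already new
            if j != i:
                c, r = inv_c(zi - zj, rj)
                sc, sr = sc + c, sr + r
        c, r = inv_c(dp(zi) / p(zi) - sc, sr)
        disks[i] = (zi - c, r)                 # immediately visible to later i
    return disks

# demo: one sweep on P(z) = z^3 - 1
zeros = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
disks = gh_single_step(lambda z: z**3 - 1, lambda z: 3 * z**2,
                       [(z + 0.1, 0.3) for z in zeros])
```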
We now estimate the bounds of the -order of convergence of the single-step methods (7.52), (7.53) and (7.54). Finding the -order of convergence of these methods for a specific is a very difficult task since sequences of centers and radii and the number of zeros are involved in the convergence analysis. However, we can easily determine the range of limit bounds of the -order by taking the limit cases and very large . In the latter case the convergence rate of a single-step method becomes almost the same as that of the corresponding total-step method when the polynomial degree is very high.
According to Theorem 7.4 and the results given by Gargantini and Henrici (1972) and Carstensen and Petković (1994) for the total-step methods, we have for very large
(7.55)
Consider now the single-step methods (7.52)–(7.54) for , which is, actually, the application of these methods to the trivial case of a quadratic polynomial with zeros and . Assume that (the “worst case” model). Introduce the errors
related to the methods (7.52)–(7.54), respectively, and indicated by the superscript indices and 3. In fact, the quantities and estimate the closeness of the centers of respective disks and to the zero .
After a somewhat tedious but elementary calculation we derive the following estimates:
The corresponding -matrices and their spectral radii and eigenvectors are:
According to (7.55), Theorem 1.5 and the above -matrices, we can formulate the following assertion.
Theorem 7.5
The ranges of the lower bounds of -order of convergence of the single-step methods (7.52), (7.53) and (7.54) are
The comparison of computational efficiency of the interval methods (7.43), (7.46) and (7.48) has been performed in a similar way as in Section 7.1, by calculating the percentage ratios for a 128-bit architecture,
where New, GH and GN indicate the new family (7.48), the Gargantini-Henrici method (7.43) and the Gargantini-Henrici method with Newton’s corrections (7.46), respectively.
The ratios (solid line) and (dashed line) are graphically presented in Fig. 7.3 as functions of the polynomial degree . These ratios show the improvement of computational efficiency of the new method (7.48) relative to the methods (7.43) and (7.46), expressed as a percentage. The improvement is especially prominent in relation to the Gargantini-Henrici method (7.43).
Figure 7.3 The ratios of efficiency indices
To demonstrate the convergence behavior of the studied methods, we have applied the inclusion methods (7.43), (7.46) and (7.48) and their single-step variants for simple roots to many polynomial equations, taking the following six functions (also listed in Section 2.3 to define Methods 2.1–2.6):
(7.56)
(7.57)
(7.58)
(7.59)
(7.60)
(7.61)
where are arbitrary parameters and is an arbitrary rational number. These functions, appearing in (7.47), are chosen to satisfy the aforementioned conditions , and , to have a simple form, and to involve variable parameters (except (7.61)).
The enclosures of the zeros in the second and third iterations, which produce very small disks, were obtained using the computational software Mathematica with multi-precision arithmetic.
Example 7.7
To find circular inclusion approximations to the zeros of the polynomial
we have implemented the inclusion methods (7.43), (7.46) and (7.48) and their single-step variants (7.52), (7.53) and (7.54). The zeros of are . We have selected the initial disks , with centers
The maximal radii of the inclusion disks, produced in the first three iterations, are given in Tables 7.7 and 7.8.
The interval methods presented in the previous section can be suitably modified for application to the inclusion of multiple zeros.
Let be simple or multiple zeros of the respective multiplicities of a monic polynomial , and let be initial inclusion disks containing these zeros, that is, . The zero-relation (7.31) suggests the following method for the simultaneous inclusion of all simple or multiple zeros of the polynomial
(7.62)
where . This method was proposed by Gargantini (1978) and has cubic convergence.
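Since the logarithmic derivative of a polynomial with zeros ζ_j of multiplicities m_j satisfies P′(z)/P(z) = Σ_j m_j/(z − ζ_j), the iteration (7.62) can be sketched in Python as follows (an illustration with the centered inversion, not from the book; disks as (center, radius) pairs, demo polynomial our own choice):

```python
def inv_c(c, r):
    # centered inversion (7.41) of a disk {c; r} not containing the origin
    a = abs(c)
    assert a > r, "disk must not contain the origin"
    return 1.0 / c, r / (a * (a - r))

def gargantini_step(logdp, mult, disks):
    """One total-step sweep of Gargantini's third-order method (7.62):
    Z_i <- z_i - m_i*(P'(z_i)/P(z_i) - sum_{j!=i} m_j*(z_i - Z_j)^{-1})^{-1}.
    logdp(z) evaluates P'(z)/P(z); mult[i] is the multiplicity of zero i."""
    out = []
    for i, (zi, ri) in enumerate(disks):
        sc, sr = 0j, 0.0
        for j, (zj, rj) in enumerate(disks):
            if j != i:
                c, r = inv_c(zi - zj, rj)
                sc, sr = sc + mult[j] * c, sr + mult[j] * r
        c, r = inv_c(logdp(zi) - sc, sr)
        out.append((zi - mult[i] * c, mult[i] * r))  # scalar times a disk
    return out

# demo: P(z) = (z - 1)^2 (z + 1)^3, so P'(z)/P(z) = 2/(z-1) + 3/(z+1)
logdp = lambda z: 2 / (z - 1) + 3 / (z + 1)
disks = [(1.05 + 0j, 0.2), (-0.95 + 0j, 0.2)]
disks = gargantini_step(logdp, [2, 3], disks)
```

A single step shrinks both radii from 0.2 to below 10⁻³ while the double zero 1 and the triple zero −1 remain enclosed.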
Using Schröder’s method of second order
for finding a multiple zero of multiplicity of a function and the zero-relation (7.31), the following iterative method with Schröder’s corrections of fourth order for the simultaneous inclusion of multiple zeros of polynomials was constructed by Carstensen and Petković (1994):
(7.63)
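Schröder’s second-order method mentioned above is the modified Newton iteration for a zero of known multiplicity m, z ← z − m·f(z)/f′(z). The following Python sketch (illustrative, not from the book; the test function is our own choice) shows its quadratic convergence to a double zero:

```python
def schroder(f, df, m, z, steps):
    # Schröder's modified Newton iteration z <- z - m*f(z)/f'(z);
    # second order for a zero of known multiplicity m
    for _ in range(steps):
        z = z - m * f(z) / df(z)
    return z

# f(z) = (z - 1)^2 (z + 2) has a double zero at z = 1
f = lambda z: (z - 1) ** 2 * (z + 2)
df = lambda z: (z - 1) * (3 * z + 3)   # f'(z), in factored form
root = schroder(f, df, 2, 1.5, 4)
```

Starting from z = 1.5, the error is roughly squared at every step, so four steps already reach machine accuracy, whereas the ordinary Newton method would converge only linearly at a multiple zero.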
Our goal is to achieve a further acceleration of the method (7.63) using only a few additional numerical operations and, in this manner, to increase the total computational efficiency of the new method. In Section 7.3 we derived simultaneous inclusion methods of sixth order using an optimal fourth-order two-point method. In this section we consider another challenging task, the construction of a simultaneous method of the same type but for the inclusion of multiple zeros. We will derive such a method using the optimal two-point method (7.35) for multiple roots of order four, already utilized in Section 7.2.
Following the derivation of the simultaneous method (7.48), we start from the zero-relation (7.31) and construct a new iterative interval method for multiple zeros of polynomials. We replace the disk appearing in (7.62) by a new disk calculated by
where we put and
In this manner we obtain a new method for the simultaneous inclusion of all multiple zeros of a given polynomial which reads
(7.64)
where and .
Taking into account that the Li-Liao-Cheng method (7.35) has order four, we can derive the same relations as in Lemma 7.1, and by Theorem 1.5 we prove the following assertion using a procedure similar to that in the proof of Theorem 7.4.
Theorem 7.6
Let be well separated and sufficiently small initial disks such that . Then the lower bound of the -order of convergence of the family of interval methods (7.64) is six.
As in Section 7.3 for simple zeros, the convergence rate of methods (7.62), (7.63) and (7.64) can be accelerated by applying the Gauss-Seidel approach using the already calculated circular approximations in the same iteration. In this manner we obtain the following single-step methods
(7.65)
(7.66)
(7.67)
Since and , according to Theorem 7.6 we immediately obtain
Theorem 7.7
The lower bounds of the -order of convergence of the single-step methods (7.65), (7.66) and (7.67) are contained in the ranges
Remark 7.4
The presented methods (7.62)–(7.67) require initial disks that contain the desired zeros, as well as the knowledge of their multiplicities. Both tasks are of great importance in the theory of iterative interval processes. The problem of obtaining initial inclusion disks was studied, for instance, in Carstensen (1991), Herceg (1997), and Petković et al. (1995), while efficient procedures for the determination of multiplicities of zeros can be found in Kravanja (1999a,b), Neumaier (1988) and Niu and Sakurai (2003).
Example 7.8
The total-step methods (7.62)–(7.64) and the single-step methods (7.65)–(7.67) have been applied for the simultaneous inclusion of multiple zeros of the polynomial
The zeros of this polynomial are with multiplicities 2, 3, 2, 2, 2, 2, 3, 2, respectively. For initial disks we have selected disks , with centers
The maximal radii of the inclusion disks, produced in the first three iterative steps, are given in Tables 7.9 and 7.10.
From Tables 7.9 and 7.10 we conclude that the simultaneous methods (7.64) and (7.67), combined with the multipoint method (7.35), give the smallest inclusion disks.
In this section we present a class of Halley-like iterative methods of high computational efficiency for the simultaneous inclusion of polynomial simple or multiple zeros. These methods are constructed by a technique based on corrections, proposed by Petković and Carstensen (1993) and Carstensen and Petković (1994). The presented family of inclusion methods can be regarded as an improved variant of the Halley-like iterative method presented in Petković (1989b) and discussed later in Petković (1999) and Petković and Milošević (2005a,b).
Let be a monic polynomial of degree with simple or multiple complex zeros , with respective multiplicities . In our consideration we will use the abbreviations
and
appears in Halley’s cubically convergent iterative method, see (1.35).
Also, we define a disk
where and are vectors whose components are disks and denotes the centered inversion of a disk (7.41).
The following zero-relation was derived in Wang and Zheng (1984):
(7.68)
Taking disks , which contain the zeros , instead of these zeros and putting in (7.68), we obtain the following inclusion,
(7.69)
where .
Let be initial disjoint disks containing the zeros , that is, for all , and let . According to the inclusion property, the following total-step method for the simultaneous inclusion of all zeros of follows from (7.69)
(7.70)
The order of the iterative method (7.70) is four, see Petković (1989b).
Remark 7.6
Halley’s correction is a constituent part of the iterative formula (7.70). For this reason, this method and its later modifications are referred to as Halley-like inclusion methods.
The acceleration of the convergence rate of the iterative method (7.70) using Schröder’s correction and Halley’s correction was obtained in Petković and Milošević (2011) in a similar way as in Carstensen and Petković (1994) and Petković and Carstensen (1993). To express these methods in a unique form, we use an additional superscript index ( for Schröder’s correction and for Halley’s correction), that is,
(7.71)
for . It was proved in Petković and Milošević (2011) that the order of convergence of the obtained improved method (7.71) is equal to . Both corrections and are denoted uniquely by .
Further acceleration of the convergence rate can be obtained by using corrections arising from the family of two-point fourth-order methods for finding multiple roots of a nonlinear equation , recently developed by Zhou et al. (2011) (see (2.163)),
(7.72)
where
and φ is an at least twice differentiable function that satisfies the following conditions
(7.73)
Here is the multiplicity of the wanted zero ζ of a function (not necessarily an algebraic polynomial). The method (7.72) will be called the ZCS-method after its authors, and the corresponding correction the ZCS-correction. The order of convergence of the iterative method (7.72) is four, that is,
(7.74)
holds (for the proof, see Zhou et al., 2011).
In what follows we substitute by an approximation of and by the corresponding multiplicity of . The disk approximation is replaced by calculated by
where , and φ is any function that satisfies conditions (7.73). In this manner we obtain an improved method for the simultaneous inclusion of all simple or multiple zeros of a given polynomial in the form (7.71) for , that is, we deal with the disks
Particular examples of the function φ are given later (Method 1 – Method 5).
To determine the order of convergence of the method , we introduce the abbreviations
(7.75)
(7.76)
First we prove the following assertion.
Proof
Let . Then . Using circular arithmetic operations given in Section 7.1, we obtain
(7.77)
and
(7.78)
Starting from (7.77) we find
(7.79)
Using the identity
we obtain
(7.80)
where is the disk within the square brackets in the second row of (7.80). Now the method can be written in the form
(7.81)
We have the following estimates:
(7.82)
From the difference
and (7.74) we obtain
Hence, there follows
(7.83)
and
(7.84)
According to the obtained estimates (7.82)–(7.84) and the identities
we find from (7.80)
(7.85)
(7.86)
Using (7.85) and (7.86) we get from (7.81)
and
Theorem 7.8
If are well separated and sufficiently small initial disks and , then the lower bound of the -order of convergence of the family of inclusion methods is 7.
Proof
The convergence analysis of inclusion methods with ZCS-corrections is based on the assertions of Lemma 7.2 and Theorem 1.5.
Without loss of generality, we adopt the relation , dealing with the “worst case” model. By virtue of Lemma 7.2 we observe that the sequences and behave as follows
According to these relations and Theorem 1.5 we form the -matrix
Its spectral radius is and the corresponding eigenvector is . Hence, with regard to Theorem 1.5, we obtain
The convergence rate of the methods (7.70) and (7.71) (assuming ) can be accelerated by applying Gauss-Seidel’s approach, which means that the already calculated inclusion disks are used in the same iteration as soon as they become available. In this manner, we obtain from (7.70) the single-step method
(7.87)
and from (7.71) the single-step methods with corrections
(7.88)
for and . Recall that is a vector whose components are inclusion disks already calculated in the same iteration, see the definition of the sums .
The inclusion methods (7.87) and (7.88) (for ) were studied in Petković and Milošević, 2011. It was proved there that the -order of convergence of the single-step method (7.87) is at least , where is the unique positive root of the equation
The ranges of the lower bounds of the -order of convergence of the methods (7.88) are
for the methods with Schröder’s and Halley’s corrections , respectively.
Let us examine now the single-step method (7.88) for . First, since the order of convergence of a single-step method approaches the order of the corresponding total-step method when the polynomial degree is very large, according to Theorem 7.8 we have
for very large ν.
Consider now the single-step method for and assume that
(the “worst case” model). After extensive and tedious calculations we arrive at the following estimates
The corresponding -matrix and its spectral radius and eigenvector are:
Taking into account the previous results, we can state the following assertion.
Theorem 7.9
The range of the lower bound of -order of convergence of the family of single-step methods is
The presented inclusion methods (7.70), (7.71), (7.87) and (7.88) have been tested on many polynomial equations. All methods have been realized using only the centered inversion. We have used different functions satisfying conditions (7.73) to construct various methods, taking
Method 1. .
Method 2. .
Method 3. .
Method 4. .
Method 5. .
The expressions for the coefficients and are given in Section 2.7.
Remark 7.7
The two-point method (7.72) with given above (Method 3) gives a special case considered in Li et al. (2009). The choice leads to the method studied in Sharma and Sharma (2010).
Example 7.9
We have found circular inclusion approximations to multiple zeros of the polynomial given in Example 7.8 using the same initial disks. By implementing the interval methods (7.70), (7.71), (7.87) and (7.88) for , we obtain inclusion disks with maximal radii given in Table 7.11.
From Table 7.11 we can conclude that the theoretical convergence order of the considered methods agrees well with their convergence rate in practice, especially in the later iterations. The very small disks obtained in the third iteration demonstrate the characteristic property of inclusion methods with corrections: the rapidly growing accuracy of the resulting disk approximations.
1. McNamee JM. Numerical Methods for Roots of Polynomials. Amsterdam: Elsevier; 2007; Part I.
2. Petković MS. Iterative Methods for Simultaneous Inclusion of Polynomial Zeros. Berlin-Heidelberg-New York: Springer-Verlag; 1989.
3. Nourein AWM. An improvement on Nourein’s method for the simultaneous determination of the zeros of a polynomial (an algorithm). J Comput Appl Math. 1977;3:109–110.
4. Petković MS, Carstensen C. On some improved inclusion methods for polynomial roots with Weierstrass’ corrections. Comput Math Appl. 1993;25:59–67.
5. Petković MS, Herceg Dj, Petković I. On a simultaneous method of Newton-Weierstrass’ type for finding all zeros of a polynomial. Appl Math Comput. 2009;215:2456–2463.
6. Traub JF. Iterative Methods for the Solution of Equations. Englewood Cliffs, New Jersey: Prentice-Hall; 1964.
7. Durand E. Solution numériques des équations algébraiques, Tom I: Équations du Type F(x)=0; Racines d’un Polynôme. Paris: Masson; 1960.
8. Dochev K. Modified Newton’s method for the simultaneous approximate calculation of all roots of a given algebraic equation. Fiz.-Mat.Spis Blgar Akad Nauk. 1962;5:136–139.
9. Kerner IO. Ein Gesamtschrittverfahren zur Berechnung der Nullstellen von Polynomen. Numer Math. 1966;8:290–294.
10. Petković MS, Petković LD. On a cubically convergent derivative free root finding method. Int J Comput Math. 2007;84:505–513.
11. Petković MS. Point Estimation of Root Finding Methods. Berlin-Heidelberg: Springer; 2008.
12. Petković MS, Ilić S, Petković I. A posteriori error bound methods for the inclusion of polynomial zeros. J Comput Appl Math. 2007;208:316–330.
13. Aberth O. Iteration methods for finding all zeros of a polynomial simultaneously. Math Comp. 1973;27:339–344.
14. Henrici P. Applied and Computational Complex Analysis, Vol 1: Power Series, Integration, Conformal Mapping, Location of Zeros. New York: John Wiley and Sons Inc; 1974.
15. Wilkinson JH. Rounding Errors in Algebraic Processes. New Jersey: Prentice Hall; 1963.
16. Gargantini I, Henrici P. Circular arithmetic and the determination of polynomial zeros. Numer Math. 1972;18:305–320.
17. Ehrlich LW. A modified Newton method for polynomials. Comm ACM. 1967;10:107–108.
18. Nourein AWM. An improvement on two iteration methods for simultaneously determination of the zeros of a polynomial. Int J Comput Math. 1977;6:241–252.
19. Petković MS, Petković LD. Families of optimal multipoint methods for solving nonlinear equations: a survey. Appl Anal Discrete Math. 2010;4:1–22.
20. Milovanović GV, Petković MS. On computational efficiency of the iterative methods for the simultaneous approximation of polynomial zeros. ACM Trans Math Software. 1986;12:295–306.
21. Brent R, Zimmermann P. Modern Computer Arithmetic. Cambridge: Cambridge University Press; 2011.
22. Fousse L, Hanrot G, Lefèvre V, Pélissier P, Zimmermann P. MPFR: a multiple-precision binary floating-point library with correct rounding. ACM Trans Math Software. 2007;33:article 13.
23. Schönhage A, Strassen V. Schnelle Multiplikation grosser Zahlen. Computing. 1971;7:281–292.
24. Fujimoto J, Ishikawa T, Perret-Gallix D. High precision numerical computations. Technical report ACCP-N-1; May 2005.
25. Weerakoon S, Fernando TGI. A variant of Newton’s method with accelerated third-order convergence. Appl Math Lett. 2000;13:87–93.
26. Li S, Liao X, Cheng L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl Math Comput. 2009;215:1288–1292.
27. Zhou X, Chen X, Song Y. Construction of higher order methods for multiple roots of nonlinear equations. J Comput Appl Math. 2011;235:4199–4206.
28. Gargantini I. Further application of circular arithmetic: Schröder-like algorithms with error bound for finding zeros of polynomials. SIAM J Numer Anal. 1978;15:497–510.
29. Milovanović GV, Petković MS. On the convergence order of a modified method for simultaneous finding polynomial zeros. Computing. 1983;30:171–178.
30. Petković MS. On the Halley-like algorithms for the simultaneous approximation of polynomial complex zeros. SIAM J Numer Anal. 1989;26:740–763.
31. Wang D, Wu Y. Some modifications of the parallel Halley iteration method and their convergence. Computing. 1987;38:75–87.
32. Alefeld G, Herzberger J. Introduction to Interval Computation. New York: Academic Press; 1983.
33. Petković MS, Petković LD. Complex Interval Arithmetic and Its Applications. Berlin-Weinheim-New York: Wiley-VCH; 1998.
34. Carstensen C, Petković MS. An improvement of Gargantini’s simultaneous inclusion method for polynomial roots by Schroeder’s correction. Appl Numer Math. 1994;13:453–468.
35. Petković I. Computational aspects of the implementation of disk inversions. Reliab Comput. 2011;15:81–90.
36. Petković MS, Milošević MR, Milošević DM. New higher-order methods for the simultaneous inclusion of polynomial zeros. Numer Algorithms. 2011;58:179–201.
37. Carstensen C. Anwendungen von Begleitmatrizen. Z Angew Math Mech. 1991;71:809–812.
38. Herceg DjD. Computer implementation and interpretation of iterative methods for solving equations (in Serbian). Master thesis, University of Novi Sad; 1997.
39. Petković MS, Carstensen C, Trajković M. Weierstrass’ formula and zero-finding methods. Numer Math. 1995;69:353–372.
40. Kravanja P. On computing zeros of analytic functions and related problems in structured numerical linear algebra. Ph.D. thesis, Katholieke Universiteit Leuven; 1999.
41. Kravanja P. A modification of Newton’s method for analytic mappings having multiple zeros. Computing. 1999;62:129–145.
42. Neumaier A. An existence test for root clusters and multiple roots. Z Angew Math Mech. 1988;68:256–257.
43. Niu XM, Sakurai T. A method for finding the zeros of polynomials using a companion matrix. Japan J Indust Appl Math. 2003;20:239–256.
44. Petković MS. Halley-like method with corrections for the inclusion of polynomial zeros. Computing. 1999;62:69–88.
45. Petković MS, Milošević DM. Improved Halley-like methods for the inclusion of polynomial zeros. Appl Math Comput. 2005;169:417–436.
46. Petković LD. A note on the evaluation in circular arithmetic. Z Angew Math Mech. 1986;66:371–373.
47. Wang X, Zheng S. A family of parallel and interval iterations for finding simultaneously all roots of a polynomial with rapid convergence (I). Comput Math. 1984;4:70–76.
48. Petković MS, Milošević DM. Higher order methods for the inclusion of multiple zeros of polynomials. Reliab Comput. 2011;15:91–108.
49. Sharma JR, Sharma R. Modified Jarratt method for computing multiple roots. Appl Math Comput. 2010;217:878–881.