In numerical analysis, Newton's method (also known as the Newton–Raphson method), named after Isaac Newton and Joseph Raphson, is perhaps the best known method for finding successively better approximations to the zeroes (or roots) of a real-valued function. Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root.

The Newton-Raphson method is one of the most widely used methods for root finding. It generalizes readily to the problem of finding solutions of a system of nonlinear equations. Moreover, it can be shown that the method is quadratically convergent as we approach the root. Unlike the bisection and false position methods, the Newton-Raphson (N-R) method requires only one initial value x, which we will refer to as the initial guess for the root.

History

Newton's method was described by Isaac Newton in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones) and in De metodis fluxionum et serierum infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by John Colson). However, his description differs substantially from the modern description given above: Newton applies the method only to polynomials. He does not compute the successive approximations $ x_n $, but computes a sequence of polynomials, and only at the end does he arrive at an approximation for the root x. Finally, Newton views the method as purely algebraic and fails to notice the connection with calculus. Isaac Newton probably derived his method from a similar but less precise method by Vieta. The essence of Vieta's method can be found in the work of the Persian mathematician Sharaf al-Din al-Tusi, while his successor Jamshīd al-Kāshī used a form of Newton's method to solve $ x^P - N = 0 $ to find roots of N (Ypma 1995). A special case of Newton's method for calculating square roots was known much earlier and is often called the Babylonian method.

Newton's method was used by the 17th-century Japanese mathematician Seki Kōwa to solve single-variable equations, though the connection with calculus was missing.

Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis. Raphson again viewed Newton's method purely as an algebraic method and restricted its use to polynomials, but he describes the method in terms of the successive approximations $ x_n $ instead of the more complicated sequence of polynomials used by Newton. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using fluxional calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero.

In 1879, in The Newton-Fourier imaginary problem, Arthur Cayley was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values. This opened the way to the study of the theory of iterations of rational functions.



Figure 1. Geometrical illustration of the Newton-Raphson method.

The Newton-Raphson method is based on the principle that if the initial guess of the root of $ f(x)=0 $ is at $ x_i $, and one draws the tangent to the curve at the point $ (x_i, f(x_i)) $, then the point $ x_{i+1} $ where the tangent crosses the x-axis is an improved estimate of the root (Figure 1). Using the definition of the slope of a function, at $ x = x_i $

$ f'(x_i)=\tan\theta=\frac{f(x_i)-0}{x_i - x_{i+1}} $

which gives

$ x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)} $ (1)

Equation (1) is called the Newton-Raphson formula for solving nonlinear equations of the form $ f(x)=0 $ . So starting with an initial guess,$ x_i $ , one can find the next guess,$ x_{i+1} $ , by using Equation (1). One can repeat this process until one finds the root within a desirable tolerance.
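As a minimal sketch of this iteration, the following Python snippet repeatedly applies Equation (1); the test function $ f(x) = x^2 - 2 $ and the initial guess are illustrative choices, not part of the article:

```python
def newton_step(f, fprime, x):
    """One Newton-Raphson update: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    return x - f(x) / fprime(x)

# Example: find sqrt(2) as the positive root of f(x) = x^2 - 2.
f = lambda x: x * x - 2.0
fprime = lambda x: 2.0 * x

x = 1.0  # initial guess
for _ in range(5):
    x = newton_step(f, fprime, x)

print(x)  # close to sqrt(2) ~ 1.4142 after a few iterations
```

Even from the crude starting point $ x_0 = 1 $, five iterations already reach machine precision, illustrating the rapid convergence discussed below.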


The steps of the Newton-Raphson method to find the root of an equation $ f(x)=0 $ are

1. Evaluate $ f'(x) $ symbolically

2. Use an initial guess of the root, $ x_i $ , to estimate the new value of the root,$ x_{i+1} $ , as

$ x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)} $

3. Find the absolute relative approximate error $ |\epsilon_a| $ as

$ |\epsilon_a| = \left|\frac{x_{i+1} - x_i}{x_{i+1}}\right| \times 100 $

4. Compare the absolute relative approximate error with the pre-specified relative error tolerance $ \epsilon_s $. If $ |\epsilon_a| > \epsilon_s $, then go to Step 2; else stop the algorithm. Also check whether the number of iterations has exceeded the maximum number allowed. If so, terminate the algorithm and notify the user.
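The four steps above can be sketched as a small routine; the function names, tolerance value, and test equation below are illustrative assumptions rather than part of the article:

```python
def newton_raphson(f, fprime, x0, tol_pct=1e-6, max_iter=50):
    """Newton-Raphson iteration following the steps above.

    tol_pct is the pre-specified tolerance on the absolute relative
    approximate error |e_a| (in percent); max_iter guards against
    non-convergence, as required in Step 4.
    """
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)               # Step 2: Equation (1)
        rel_err = abs((x_new - x) / x_new) * 100   # Step 3: |e_a| in percent
        x = x_new
        if rel_err <= tol_pct:                     # Step 4: compare with e_s
            return x
    raise RuntimeError("maximum number of iterations exceeded")

# Illustrative use: the real root of x^3 - x - 2 = 0.
root = newton_raphson(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, 1.5)
```

Note that the derivative `fprime` is supplied by the caller, matching Step 1 (evaluating $ f'(x) $ symbolically beforehand).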

Advantages and Disadvantages

  • Each iteration is relatively expensive: it requires both a function evaluation and a derivative evaluation.
  • If the tangent is parallel or nearly parallel to the x-axis, the method does not converge.
  • The method is usually expected to converge only near the solution.
  • The main advantage of the method is that its order of convergence is quadratic.
  • When it does converge, its convergence rate is one of the fastest of any root-finding method.


Newton's method is an extremely powerful technique—in general the convergence is quadratic: the error is essentially squared (the number of accurate digits roughly doubles) at each step. However, there are some difficulties with the method.

  1. Newton's method requires that the derivative be calculated directly. In most practical problems, the function in question may be given by a long and complicated formula, and hence an analytical expression for the derivative may not be easily obtainable. In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two points on the function. In this case, the Secant method results. This has slightly slower convergence than Newton's method but does not require the existence of derivatives.
  2. If the initial value is too far from the true zero, Newton's method may fail to converge. For this reason, Newton's method is often referred to as a local technique. Most practical implementations of Newton's method put an upper limit on the number of iterations and perhaps on the size of the iterates.
  3. If the derivative of the function is not continuous the method may fail to converge.
  4. It is clear from the formula for Newton's method that it will fail in cases where the derivative is zero. Similarly, when the derivative is close to zero, the tangent line is nearly horizontal and hence may overshoot the desired root.
  5. If the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together then it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent.
  6. Newton's method works best for functions with low curvature. For linear functions, which have zero curvature, Newton's method finds the root after a single iteration.
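The secant variant mentioned in difficulty 1, which replaces the derivative with the slope of a line through the two most recent iterates, can be sketched as follows (the test function and starting points are arbitrary illustrations):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant method: Newton's update with f'(x) replaced by a
    difference quotient through the two most recent iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        # slope through (x0, f0) and (x1, f1) stands in for f'(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("failed to converge")

# Illustrative use: sqrt(2) as the positive root of x^2 - 2 = 0.
root = secant(lambda x: x**2 - 2, 1.0, 2.0)
```

The trade-off matches the text: no derivative is required, at the cost of somewhat slower (superlinear rather than quadratic) convergence.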

Since the most serious of the problems above is the possibility of a failure of convergence, Press et al. (1992) presented a version of Newton's method that starts at the midpoint of an interval in which the root is known to lie and stops the iteration if an iterate is generated that lies outside the interval.
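A rough sketch of the safeguarded idea attributed to Press et al. above follows; this is a hypothetical minimal version that only implements the two features the text describes (start at the interval midpoint, abort if an iterate escapes the bracket), not the published routine:

```python
def safeguarded_newton(f, fprime, a, b, tol=1e-10, max_iter=50):
    """Newton iteration started at the midpoint of a bracketing
    interval [a, b]; stops if an iterate leaves the interval."""
    x = (a + b) / 2.0  # start at the midpoint of the bracket
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if not (a <= x_new <= b):
            raise RuntimeError("iterate left the bracketing interval")
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("maximum number of iterations exceeded")

# Illustrative use: sqrt(2) bracketed in [1, 2].
root = safeguarded_newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 2.0)
```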

Developers of large scale computer systems involving root finding tend to prefer the secant method over Newton's method because the use of a difference quotient in place of the derivative in Newton's method implies that the additional code to compute the derivative need not be maintained. In practice, the advantages of maintaining a smaller code base usually outweigh the superior convergence characteristics of Newton's method.


A multidimensional root-finding method such as the Newton-Raphson method can also be used. The approach is to linearize the system around an approximate solution, say from iteration k, and then solve the resulting linear equations to obtain the next iterate. The Newton-Raphson method converges more rapidly than other methods of numerical root finding. A disadvantage of multidimensional root finding compared with the single-dimensional case is that "there are no good general methods for solving systems of more than one nonlinear equation." The method is also applied, for example, to establish a relationship between position, velocity, and acceleration.
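For a two-equation system, each Newton step linearizes around the current iterate and solves the 2×2 linear system $ J\,\delta = -F $ for the correction. A minimal sketch, with an illustrative example system of my own choosing:

```python
def newton_system_2d(F, J, x, y, tol=1e-12, max_iter=50):
    """Newton's method for a 2x2 nonlinear system: linearize around
    the current iterate and solve J * delta = -F for the correction."""
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        a, b, c, d = J(x, y)             # Jacobian rows [[a, b], [c, d]]
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det    # Cramer's rule for J*delta = -F
        dy = (-a * f2 + c * f1) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            return x, y
    raise RuntimeError("failed to converge")

# Example system: x^2 + y^2 = 4 and x*y = 1, started near a solution.
F = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
J = lambda x, y: (2 * x, 2 * y, y, x)
x, y = newton_system_2d(F, J, 2.0, 0.5)
```

As the text cautions, the initial guess matters even more in several dimensions: the iteration above is started deliberately close to a solution.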

In the business world, if you can figure out the equation of a curve (which you can with Excel or almost any spreadsheet), you can use the Newton-Raphson method to find all of its maxima and minima (that is, the points where the derivative equals zero), so that you can analyze profit and loss in the company.
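Locating an extremum this way means applying Newton-Raphson to the derivative, i.e. solving $ f'(x)=0 $. A sketch with a made-up fitted profit curve (the quadratic below is purely illustrative):

```python
# Hypothetical fitted profit curve and its derivatives.
profit = lambda x: -2 * x**2 + 8 * x + 3   # illustrative, not real data
dprofit = lambda x: -4 * x + 8             # first derivative
d2profit = lambda x: -4.0                  # second derivative

x = 0.0
for _ in range(20):
    x = x - dprofit(x) / d2profit(x)       # Newton step applied to f'

# x now sits at the stationary point where dprofit(x) = 0
```

For a quadratic curve the derivative is linear, so a single Newton step lands exactly on the maximum, consistent with difficulty 6 above.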


You are working for ‘DOWN THE TOILET COMPANY’ that makes floats for ABC commodes. The floating ball has a specific gravity of 0.6 and has a radius of 5.5 cm. You are asked to find the depth to which the ball is submerged when floating in water.


Figure 2. Floating ball problem.

The equation that gives the depth $ x $ in meters to which the ball is submerged under water is

$ x^3 - 0.165x^2 + 3.993\times10^{-4} = 0 $

Use the Newton-Raphson method of finding roots of equations to find

a) the depth x to which the ball is submerged under water. Conduct three iterations to estimate the root of the above equation.

b) the absolute relative approximate error at the end of each iteration, and

c) the number of significant digits at least correct at the end of each iteration.
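A sketch of parts (a) and (b) in Python follows. The initial guess is not specified above, so $ x_0 = 0.05 $ m (roughly mid-range for a ball of radius 0.055 m) is an illustrative assumption:

```python
# Floating-ball equation: f(x) = x^3 - 0.165 x^2 + 3.993e-4 = 0
f = lambda x: x**3 - 0.165 * x**2 + 3.993e-4
fprime = lambda x: 3 * x**2 - 0.33 * x   # derivative, Step 1

x = 0.05  # assumed initial guess in meters
for i in range(1, 4):  # three iterations, as requested in part (a)
    x_new = x - f(x) / fprime(x)
    rel_err = abs((x_new - x) / x_new) * 100  # part (b), in percent
    x = x_new
    print(f"iteration {i}: x = {x:.6f} m, |e_a| = {rel_err:.4f}%")
```

With this starting point the iterates settle near 0.0624 m within three iterations, and the shrinking relative error at each step can be used for part (c).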


