Friday, February 4, 2011

Numerical stability and well-posed problems

Numerical stability

A related phenomenon is instability. Typically, an algorithm would approach the exact solution in the limit if there were no round-off or truncation errors, but depending on the specific computational method, small errors can be magnified instead of damped, leading to large errors in the final result.
Sometimes a single calculation can be achieved in several ways, all of which are algebraically equivalent in terms of ideal real or complex numbers, but which in practice yield different results when performed on a digital computer. Some calculations might damp out the approximation errors that occur; others might magnify such errors. Calculations that can be proven not to magnify approximation errors are called numerically stable. One of the common tasks of numerical analysis is to select algorithms that are robust, that is to say, that have good numerical stability among other desirable properties.
Numerical problems
Numerical stability is an important notion in numerical analysis. An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. Conversely, if a problem is ill-conditioned, then any small error in the data will grow to be a large error.
Both the original problem and the algorithm used to solve that problem can be well-conditioned and/or ill-conditioned, and any combination is possible.
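As a minimal sketch of what ill-conditioning looks like in practice, consider the following made-up 2x2 linear system (the matrix, the data, and the Python/NumPy code are ours for illustration, not from the text): perturbing the right-hand side by only 0.0001 moves the solution from (1, 1) all the way to (0, 2).

    # A hypothetical ill-conditioned 2x2 linear system.
    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])          # nearly singular matrix
    b = np.array([2.0, 2.0001])            # exact solution: (1, 1)
    b_perturbed = np.array([2.0, 2.0002])  # data changed by only 1e-4

    print(np.linalg.solve(A, b).round(6))            # [1. 1.]
    print(np.linalg.solve(A, b_perturbed).round(6))  # [0. 2.]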
So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. For instance, computing the square root of 2 (which is roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with an initial approximation x_1 to \sqrt{2}, for instance x_1 = 1.4, and then computing improved guesses x_2, x_3, and so on. One such method is the famous Babylonian method, which is given by x_{k+1} = \frac{x_k}{2} + \frac{1}{x_k}. Another iteration, which we will call Method X, is given by x_{k+1} = (x_k^2 - 2)^2 + x_k. We have calculated a few iterations of each scheme in table form below, with initial guesses x_1 = 1.4 and x_1 = 1.42.
Babylonian (x_1 = 1.4)    Babylonian (x_1 = 1.42)     Method X (x_1 = 1.4)      Method X (x_1 = 1.42)
x_2 = 1.4142857...        x_2 = 1.41422535...         x_2 = 1.4016              x_2 = 1.42026896
x_3 = 1.414213564...      x_3 = 1.41421356242...      x_3 = 1.4028614...        x_3 = 1.42056...
...                       ...                         ...                       ...
                                                      x_1000000 = 1.41421...    x_28 = 7280.2284...
Observe that the Babylonian method converges quickly regardless of the initial guess, whereas Method X converges extremely slowly with initial guess x_1 = 1.4 and diverges for initial guess x_1 = 1.42. Hence, the Babylonian method is numerically stable, while Method X is numerically unstable.
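The behaviour in the table is easy to check in a few lines of code. Below is a minimal sketch in Python (the function names babylonian and method_x are ours, and ordinary double precision is used rather than the high-precision arithmetic the table implies):

    # Compare the Babylonian iteration with "Method X" for approximating sqrt(2).
    def babylonian(x):
        # x_{k+1} = x_k/2 + 1/x_k
        return x / 2 + 1 / x

    def method_x(x):
        # x_{k+1} = (x_k^2 - 2)^2 + x_k
        return (x * x - 2) ** 2 + x

    for x1 in (1.4, 1.42):
        b, m = x1, x1
        for _ in range(30):
            b = babylonian(b)
            m = method_x(m)
        print(x1, b, m)
    # Babylonian settles to 1.41421356... within a few steps from either guess;
    # Method X is still far from sqrt(2) after 30 steps from 1.4, and has
    # already exploded (past 10^200) from 1.42.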
Numerical stability is also affected by the number of significant digits the machine keeps. On a machine that keeps only four digits after the decimal point, the following two equivalent functions give a good example of loss of significance:

f(x) = x(\sqrt{x+1}-\sqrt{x}) \quad\text{and}\quad g(x) = \frac{x}{\sqrt{x+1}+\sqrt{x}}

If we compare the results of
 f(500)=500(\sqrt{501}-\sqrt{500})=500(22.3830-22.3607)=500(0.0223)=11.1500
and
\begin{align}g(500) &= \frac{500}{\sqrt{501}+\sqrt{500}}\\ &= \frac{500}{22.3830+22.3607}\\ &= \frac{500}{44.7437} = 11.1748\end{align}
Looking at the two results above, we realize that loss of significance, which is also called subtractive cancellation, has a huge effect on the results, even though the two functions are equivalent. To show that they are equivalent, we simply start with f(x) and arrive at g(x):
\begin{align}f(x) &= x(\sqrt{x+1}-\sqrt{x})\\ &= x(\sqrt{x+1}-\sqrt{x})\cdot\frac{\sqrt{x+1}+\sqrt{x}}{\sqrt{x+1}+\sqrt{x}}\\ &= x\,\frac{(\sqrt{x+1})^2-(\sqrt{x})^2}{\sqrt{x+1}+\sqrt{x}}\\ &= \frac{x}{\sqrt{x+1}+\sqrt{x}} = g(x)\end{align}
The true value of the result is 11.174755..., which is exactly g(500) = 11.1748 after rounding to four decimal digits.
Now imagine that a program uses tens of expressions like these: the error will grow as the computation proceeds unless the suitable formula of the two is chosen each time f(x) or g(x) is evaluated. The choice depends on the magnitude of x, since for large x the subtraction in f(x) cancels most of the significant digits, while g(x) avoids the cancellation.
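The four-digit behaviour above is easy to reproduce by rounding every intermediate value, as in the following Python sketch (fl is our own name for the rounding helper; it is not from the cited text):

    # Evaluate f and g while rounding each intermediate value to 4 decimal digits.
    from math import sqrt

    def fl(v):
        # crude model of a machine keeping four digits after the decimal point
        return round(v, 4)

    def f(x):
        # f(x) = x*(sqrt(x+1) - sqrt(x)), rounding after every operation
        return fl(x * fl(fl(sqrt(x + 1)) - fl(sqrt(x))))

    def g(x):
        # g(x) = x/(sqrt(x+1) + sqrt(x)), same rounding model
        return fl(x / fl(fl(sqrt(x + 1)) + fl(sqrt(x))))

    print(f(500))   # 11.15    -- subtractive cancellation wiped out digits
    print(g(500))   # 11.1748  -- agrees with the true value 11.174755...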
  • The example is taken from Mathews, Numerical Methods Using MATLAB, 3rd ed.
  • Example

    As an example of an unstable algorithm, consider the task of adding an array of 100 numbers. To simplify things, assume our computer only has two digits of precision (for example, numbers can be represented as 2.3, 77, 100, 110, 120, etc., but not 12.3 or 177).
    The obvious way to do this would be the following:
    function sumArray(array) is
        let theSum = 0
        for each element in array do
            let theSum = theSum + element
        end for
        return theSum
    end function
    That looks reasonable, but suppose the first element in the array was 1.0 and the other 99 elements were 0.01. In exact arithmetic, the answer would be 1.99. However, on our two-digit computer, once the 1.0 was added into the sum variable, adding in 0.01 would have no effect on the sum, and so the final answer would be 1.0 – not a very good approximation of the real answer. Furthermore, we see that the algorithm depends on the ordering of elements within the array, in contrast to the exact arithmetic.
    A stable algorithm would first sort the array by the absolute values of the elements in ascending order. This ensures that the numbers closest to zero will be taken into consideration first. Once that change is made, all of the 0.01 elements will be added, giving 0.99, and then the 1.0 element will be added, yielding a rounded result of 2.0 – a much better approximation of the real result.
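    This, too, can be simulated directly; the Python sketch below (fl2 and sum_array are our own names for this illustration) rounds every partial sum to two significant digits, mimicking the two-digit computer:

        # Simulate the two-digit machine from the example above.
        from math import floor, log10

        def fl2(v):
            # round v to 2 significant digits (zero stays zero)
            if v == 0:
                return 0.0
            return round(v, 1 - floor(log10(abs(v))))

        def sum_array(values):
            total = 0.0
            for v in values:
                total = fl2(total + v)   # every partial sum is rounded
            return total

        data = [1.0] + [0.01] * 99
        print(sum_array(data))                   # 1.0 -- the 0.01 terms vanish
        print(sum_array(sorted(data, key=abs)))  # 2.0 -- small terms added first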

