LOSS OF SIGNIFICANCE

Loss of significance is an undesirable effect in calculations using finite-precision arithmetic. It occurs when an operation on two numbers increases relative error substantially more than it increases absolute error, for example when subtracting two nearly equal numbers (known as catastrophic cancellation). The effect is that the number of significant digits in the result is reduced unacceptably. Ways to avoid this effect are studied in numerical analysis.
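Catastrophic cancellation can be sketched with ordinary 64-bit binary floats (about 16 significant decimal digits); the specific values here are illustrative, not from the article:

```python
# A minimal sketch of catastrophic cancellation with 64-bit binary floats.
x = 1.0 + 1e-15   # the stored sum is already a rounded approximation
y = 1.0
diff = x - y      # exact mathematical answer: 1e-15

# The subtraction itself is exact, but it exposes the rounding error made
# when 1 + 1e-15 was stored: the relative error jumps to roughly 11%.
rel_err = abs(diff - 1e-15) / 1e-15
print(diff, rel_err)
```

The subtraction discards the leading digits in which the two operands agree, so only the (already rounded) trailing digits survive.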

Demonstration of the problem

The effect can be demonstrated with decimal numbers. The following example shows loss of significance for a decimal floating-point data type with 10 significant digits:

Consider the decimal number

   0.1234567891234567890

A floating-point representation of this number on a machine that keeps 10 floating-point digits would be

   0.1234567891

which is fairly close when the error is measured as a percentage of the value (relative error). It is very different when measured in terms of absolute precision: the first is accurate to 10×10⁻²⁰, while the second is only accurate to 10×10⁻¹⁰.
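The two error measures can be checked with Python's decimal module, whose default 28-digit precision is exact for these operands (a sketch, using the article's numbers):

```python
from decimal import Decimal

true_val = Decimal("0.1234567891234567890")
stored   = Decimal("0.1234567891")   # the 10-digit representation

abs_err = abs(true_val - stored)     # about 2.3e-11 in absolute terms
rel_err = abs_err / true_val         # about 1.9e-10 as a fraction of the value
print(abs_err, rel_err)
```

The relative error is tiny, which is why the 10-digit representation looks "fairly close" as a percentage even though many trailing digits were discarded.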

Now perform the calculation

   0.1234567891234567890 − 0.1234567890000000000

The answer, accurate to 20 significant digits, is

   0.0000000001234567890

However, on the 10-digit floating-point machine, the calculation yields

   0.1234567891 − 0.1234567890 = 0.0000000001

In both cases the result is accurate to the same order of magnitude as the inputs (10⁻²⁰ and 10⁻¹⁰, respectively). In the second case, the answer appears to have only one significant digit, which would amount to a loss of significance. However, in computer floating-point arithmetic, all operations can be viewed as being performed on antilogarithms, for which the rules for significant figures indicate that the number of significant figures in the result is the same as the smallest number of significant figures in the mantissas. The way to indicate this and represent the answer to 10 significant figures is:

   1.000000000×10⁻¹⁰
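The 10-digit machine above can be emulated with Python's decimal module by setting the context precision; this sketch reproduces the worked example:

```python
from decimal import Decimal, getcontext

getcontext().prec = 10   # emulate a machine keeping 10 significant digits

a = Decimal("0.1234567891234567890")
b = Decimal("0.1234567890000000000")

# Unary + applies the context rounding, mimicking storage on the machine.
a10, b10 = +a, +b        # 0.1234567891 and 0.1234567890
print(a10 - b10)         # 1E-10: all but one digit has cancelled
```

The module reports the difference with a single digit in the coefficient, matching the observation that only one significant figure survives the cancellation.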