Robustness in Geometric Computation


Gang Mei

In this chapter, the algorithmic robustness issue in geometric computation, covering both numerical robustness and geometrical robustness, is addressed. The main content focuses on two questions: first, how non-robustness originates, and second, how to deal with the robustness problems that arise in geometric computation.

Robustness here refers simply to the capacity of programs, or rather of the underlying algorithms, to handle numeric computations or geometrical configurations that are in some way difficult to deal with. A robust algorithm or program does not crash or fall into infinite loops because of numeric errors, always returns consistent results for equivalent inputs, and handles degenerate cases in an expected way.

As one type of geometric modeling system, a 3D geological modeling system is certainly sensitive to robustness problems because of the numerical and geometrical nature of the algorithms it adopts. Such a system cannot be reliable unless the robustness issues of geometric computation are considered. This chapter discusses the numerical origins of these problems and practical approaches for minimizing or even fully avoiding them.

1.1 Robustness Problem Types

In general, the robustness problems can be classified into two categories according to their causes: those due to precision inaccuracies and those due to degenerations. Precision problems are caused by the fact that calculations on a computer are not performed with exact real arithmetic but limited-precision floating-point or integer arithmetic. Degenerations are the special cases or conditions that are usually not considered during the generic implementation of algorithms.

Often algorithms are designed under the assumptions that all computations are performed in exact real arithmetic and that no degeneracies exist in the data operated on. Unfortunately, in reality, these assumptions do not hold. First, computations, performed predominantly using floating-point arithmetic, are not exact but suffer from, for example, rounding errors. Second, degenerate situations cannot be completely avoided and occur in many cases.

The presence of problems from either category can give rise to, for example, conflicting results from different computations, incorrect answers, and even algorithms crashing or getting stuck in infinite loops. Numerical imprecision originates from several identifiable sources.

1. Conversion and representation errors. Many errors stem from decimal numbers, in general, not being exactly representable as binary numbers, which is the typical radix used in fixed-point or floating-point arithmetic. Irrational numbers, such as √2, are not representable in any (integral) number base, further compounding the problem.

2. Overflow and underflow errors. Numbers that become too large to be expressed in the given representation give rise to overflow errors (or underflow errors when the numbers are too small). These errors typically occur in multiplying two large (or small) numbers or dividing a large number by a small one (or vice versa).

3. Round-off errors. Consider the multiplication operation: in multiplying two real numbers the product c=ab requires higher precision to represent than either a or b themselves. When the exact representation of c requires more than the n bits the representation allows, c must be rounded or truncated to n bits and consequently will be inexact in the last representable digit. Most operations cause round-off errors, not just multiplication.

4. Digit-cancellation errors. Cancellation errors occur when nearly equal values are subtracted, or when a small value is added to or subtracted from a large one.

5. Input errors. Errors may also stem from input data. These errors may be modeling errors, measurement errors, or errors due to previous inexact calculations.

Numeric non-robustness would never arise if all computations were carried out using exact real arithmetic rather than limited-precision computer arithmetic. In practice, however, exact real arithmetic is not available. To fully understand and ultimately address robustness problems in geometric computation, it is important to have a good understanding of the inherent peculiarities and restrictions of limited-precision computer arithmetic. The remainder of this chapter focuses particularly on the floating-point representation of real numbers and on measures that allow for robust use of floating-point arithmetic.

Real Numbers

What are real numbers? In mathematics, a real number is a value that represents a quantity along a continuous line. The real numbers include all the rational numbers, such as the integer −5 and the fraction 3/4, and all the irrational numbers, such as √2 and π. Real numbers can be thought of as points on an infinitely long line called the number line or real line, where the points corresponding to integers are equally spaced.

Any real number can be determined by a possibly infinite decimal representation such as that of 8.632, where each consecutive digit is measured in units one tenth the size of the previous one. The real line can be thought of as a part of the complex plane, and correspondingly, complex numbers include real numbers as a special case.

The reals are uncountable, that is, while both the set of all natural numbers and the set of all real numbers are infinite sets, there can be no one-to-one function from the real numbers to the natural numbers.

A real number can be any positive or negative number. This includes all integers and all rational and irrational numbers. Rational numbers may be expressed as a fraction (such as 7/8), and irrational numbers may be expressed by an infinite decimal representation (3.1415926535...). In computing, real numbers with fractional parts are commonly stored as floating-point numbers, so called because the radix point "floats" relative to the digits.

Real numbers are relevant to computing because computer calculations involve both integer and floating point calculations. Since integer calculations are generally simpler than floating point calculations, a computer's processor may use a different type of logic for performing integer operations than it does for floating point operations. The floating point operations may be performed by a separate part of the CPU called the floating point unit, or FPU.

While computers can process all types of real numbers, irrational numbers (those with non-terminating, non-repeating decimal expansions) are generally approximated. For example, a program may limit all real numbers to a fixed number of decimal places. This saves processing time that would otherwise be spent calculating numbers with greater, but unnecessary, accuracy.

What Are Floating Point Numbers?

There are several ways to represent real numbers on computers. Fixed point places a radix point somewhere in the middle of the digits, and is equivalent to using integers that represent portions of some unit. For example, one might represent 1/100ths of a unit; if you have four decimal digits, you could represent 10.82, or 00.01. Another approach is to use rationals, and represent every number as the ratio of two integers.

Floating-point representation basically represents reals in scientific notation. Scientific notation represents numbers as a base number and an exponent. For example, 123.456 could be represented as 1.23456 × 10^2.

Floating-point solves a number of representation problems. Fixed-point has a fixed window of representation, which limits it from representing very large or very small numbers. Fixed-point is also prone to a loss of precision when two large numbers are divided.

11.2 Real Number Representation

This section addresses the question of how to represent real numbers on computers. Although there are infinitely many real numbers, representations of real numbers on a computer must be finite in size. Consequently, a given representation will not be able to represent all real numbers exactly. Those real numbers not directly representable in the computer representation are approximated by the nearest representable number. The real numbers that are exactly expressible are called machine-representable numbers (or just machine numbers).

The formal definition of representable numbers is as follows: representable numbers are numbers capable of being represented exactly in a given binary number format [1]. The format can be either an integer or a floating-point format. Representable numbers are often visually depicted on a number line.

When talking about computer representations of real numbers it is important to distinguish between the precision and the accuracy with which a number is given. Precision specifies how many digits (binary, decimal, or otherwise) there are in the representation. Accuracy concerns correctness and is related to how many digits of an approximation are correct. Thus, it is not unreasonable for an inaccurate value to be given precisely. For example, at 10 significant decimal digits, 3.123456789 is quite a precise but very inaccurate approximation (two correct digits) of π (3.14159...).

There are two common computer representations for real numbers: fixed-point numbers and floating-point numbers. Fixed-point numbers allocate a set number of digits for the integer and the fractional part of the real number. A 32-bit quantity, for example, may hold 16 bits for both the integer and fractional parts (a so-called 16.16 fixed-point number) or hold 8 bits for the integer part and 24 bits for the fractional part (an 8.24 fixed-point number). Given a real number r, the corresponding fixed-point number with n bits of fraction is normally represented as the integer part of 2^n · r.

For example, 1/3 can be represented as the 16.16 fixed-point number 21845 (the integer part of 2^16/3) or the 8.24 fixed-point number 5592405 (the integer part of 2^24/3). As a consequence, fixed-point numbers are equally spaced on the number line (Figure 11.1). With fixed-point numbers there is a trade-off between the range over which the numbers may span and the precision with which the fractional part may be given. If numbers are known to be in a limited range around zero, fixed-point numbers allow high fractional precision (and thus potentially high accuracy).
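As an illustration of this convention, the following small C sketch converts between a real value and a 16.16 fixed-point integer (the helper names real_to_fixed16_16 and fixed16_16_to_real are illustrative, not taken from the text):

#include <stdint.h>
#include <stdio.h>

/* Convert a real value to 16.16 fixed point: the integer part of 2^16 * r. */
static int32_t real_to_fixed16_16(double r)
{
    return (int32_t)(r * 65536.0);   /* 65536 = 2^16 */
}

/* Convert a 16.16 fixed-point value back to a real value. */
static double fixed16_16_to_real(int32_t f)
{
    return f / 65536.0;
}

int main(void)
{
    int32_t third = real_to_fixed16_16(1.0 / 3.0);
    printf("1/3 as 16.16 fixed point: %d (%.6f)\n",
           third, fixed16_16_to_real(third));
    return 0;
}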

In contrast, the decimal point of a floating-point number is not fixed at any given position but is allowed to "float" as needed. The floating of the decimal point is accomplished by expressing numbers in a normalized format similar to scientific notation. In scientific notation, decimal numbers are written as a number c (the coefficient), with an absolute value in the range 1 ≤ |c| < 10, multiplied by 10 raised to some (signed) integer power. For example:

8174.12 = 8.17412 · 10^3
−12.345 = −1.2345 · 10^1
0.0000724 = 7.24 · 10^−5

Floating-point representations are generally not decimal but binary. Thus, (nonzero) numbers are instead typically given in the form ±(1.f) · 2^e. Here, f corresponds to a number of fractional bits and e is called the exponent. For example, −8.75 would be represented as −1.00011 · 2^3. The 1.f part is referred to as the mantissa or significand. The f term alone is sometimes also called the fraction. For this format, the number 0 is handled as a special case (as discussed in the next section).

Floating-point values are normalized to derive several benefits. First, each number is guaranteed a unique representation, and thus there is no confusion whether 100 should be represented as, say, 10 · 10^1, 1 · 10^2, or 0.1 · 10^3. Second, leading zeroes in a number such as 0.00001 do not have to be represented, giving more precision for small numbers. Third, as the leading bit before the fractional part is always 1, it does not have to be stored, giving an extra bit of mantissa for free.

Unlike fixed-point numbers, for floating-point numbers the density of representable numbers varies with the exponent, as all (normalized) numbers are represented using the same number of mantissa bits. On the number line, the spacing between representable numbers increases the farther away from zero a number is located (Figure 11.2). Specifically, for a binary floating-point representation the number range corresponding to the exponent k + 1 has a spacing twice that of the number range corresponding to the exponent k. Consequently, the density of floating-point numbers is highest around zero. (The exception is a region immediately surrounding zero, where there is a gap. This gap is caused by the normalization fixing the leading bit to 1, in conjunction with the fixed range for the exponent.) As an example, for the IEEE single-precision format described in the next section there are as many numbers between 1.0 and 2.0 as there are between 256.0 and 512.0. There are also many more numbers between 10.0 and 11.0 than between 10000.0 and 10001.0 (1048575 and 1023, respectively).

Compared to fixed-point numbers, floating-point numbers allow a much larger range of values to be represented, while at the same time good precision is maintained for small near-zero numbers. The larger range makes floating-point numbers more convenient to work with than fixed-point numbers. It is still possible for numbers to become larger than can be expressed with a fixed-size exponent. When a floating-point number becomes too large it is said to have overflowed. Similarly, when the number becomes smaller than can be represented with a fixed-size exponent it is said to have underflowed. Because a fixed number of bits is always reserved for the exponent, floating-point numbers may be less precise than fixed-point numbers for certain number ranges. Today, with the exception of some handheld game consoles, all home computers and game consoles have hardware-supported floating-point arithmetic, with speeds matching and often exceeding that of integer and fixed-point arithmetic.

As will be made clear in the next couple of sections, floating-point arithmetic does not have the same properties as arithmetic on real numbers, which can be quite surprising to the uninitiated. To fully understand the issues associated with using floating-point arithmetic, it is important to be familiar with the representation and its properties. Today, virtually all platforms adhere (or nearly adhere) to the IEEE-754 floating-point standard, discussed next.

Figure 11.1 Fixed-point numbers are equally spaced on the number line.

Figure 11.2 Floating-point numbers are not evenly spaced on the number line. They are denser around zero (except for a normalization gap immediately surrounding zero) and become more and more sparse the farther from zero they are. The spacing between successive representable numbers doubles for each increase in the exponent.

11.2.1 The IEEE-754 Floating-point Formats

The IEEE-754 standard, introduced in 1985, is today the de facto floating-point standard. It specifies two basic binary floating-point formats: single-precision and double-precision floating-point numbers.

Figure 11.3 The IEEE-754 single-precision (top) and double-precision (bottom) floating-point formats.

In the single-precision floating-point format, floating-point numbers are at the binary level 32-bit entities consisting of three parts: 1 sign bit (S), 8 bits of exponent (E), and 23 bits of fraction (F), as illustrated in Figure 11.3. The leading bit of the mantissa, before the fractional part, is not stored because it is always 1. This bit is therefore referred to as the hidden or implicit bit. The exponent is also not stored directly as is. To represent both positive and negative exponents, a bias is added to the exponent before it is stored. For single-precision numbers this bias is +127. The stored exponents 0 (all zeroes) and 255 (all ones) are reserved to represent special values (discussed ahead). Therefore, the effective range for E becomes −126 ≤ E ≤ 127. All in all, the value V of a stored IEEE-754 single-precision number is therefore

V = (−1)^S · (1.F) · 2^(E−127).

The IEEE-754 double-precision format is a 64-bit format. It consists of 1 sign bit, 11 bits of exponent, and 52 bits of fraction (Figure 11.3). The exponent is given with a bias of 1023. The value of a stored IEEE-754 double-precision number is therefore

V = (−1)^S · (1.F) · 2^(E−1023).

With their 8-bit exponent, IEEE-754 single-precision numbers can represent numbers of absolute value in the range of approximately 10^−38 to 10^38. The 24-bit significand means that these numbers have a precision of six to nine decimal digits. The double-precision format can represent numbers of absolute value between approximately 10^−308 and 10^308, with a precision of 15 to 17 decimal digits. Table 11.1 outlines the representation for a few single-precision floating-point numbers.
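To make the single-precision layout concrete, the following C sketch extracts the sign, biased exponent, and fraction fields from a float and reconstructs the value of a normalized number according to V = (−1)^S · (1.F) · 2^(E−127). The function name decode_float is illustrative only, and special values (zero, denormals, infinities, NaN) are not handled:

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Print the sign, exponent, and fraction fields of an IEEE-754 single. */
static void decode_float(float value)
{
    uint32_t bits;
    memcpy(&bits, &value, sizeof bits);       /* reinterpret the bit pattern */

    uint32_t sign     = bits >> 31;           /* 1 bit                  */
    uint32_t exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127  */
    uint32_t fraction = bits & 0x7FFFFF;      /* 23 bits                */

    /* Reconstruct a normalized value: (-1)^S * (1.F) * 2^(E-127). */
    double mantissa = 1.0 + fraction / 8388608.0;   /* 8388608 = 2^23 */
    double v = (sign ? -1.0 : 1.0) * mantissa * pow(2.0, (int)exponent - 127);

    printf("%-10g  S=%u  E=%3u  F=0x%06X  reconstructed=%g\n",
           value, (unsigned)sign, (unsigned)exponent, (unsigned)fraction, v);
}

int main(void)
{
    decode_float(1.0f);
    decode_float(-8.75f);
    decode_float(0.1f);
    return 0;
}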

Thanks to the bias, floating-point numbers have the useful property that comparing the bit patterns of two positive floating-point numbers as if they were two 32-bit integers will give the same Boolean result as a direct floating-point comparison. Positive floating-point numbers can therefore be correctly sorted on their integer representation. However, when negative floating-point numbers are involved, a signed integer comparison would order the numbers as follows:

−0.0 < −1.0 < −2.0 < ... < 0.0 < 1.0 < 2.0 < ...

If desired, this problem can be overcome by inverting the integer range corresponding to the negative floating-point numbers (by treating the float as an integer in some machine-specific manner) before ordering the numbers, and afterward restoring the floating-point values through an inverse process.
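One common way of implementing this trick is sketched below in C (the helper name float_to_sortable_key is illustrative, not from the text): each 32-bit float bit pattern is mapped to an unsigned key whose integer ordering matches the floating-point ordering.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Map a float's bit pattern to an unsigned key so that comparing keys as
   integers orders the values the same way as a floating-point comparison:
   for positive floats, set the sign bit so they sort above all negatives;
   for negative floats, flip all bits so more negative values sort lower. */
static uint32_t float_to_sortable_key(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return (bits & 0x80000000u) ? ~bits : (bits | 0x80000000u);
}

int main(void)
{
    float values[] = { -2.0f, -1.0f, -0.0f, 0.0f, 1.0f, 2.0f };
    for (int i = 0; i < 6; ++i)
        printf("% .1f -> 0x%08X\n",
               values[i], (unsigned)float_to_sortable_key(values[i]));
    return 0;
}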

Note that the number 0 cannot be directly represented by the expression V = (−1)^S · (1.F) · 2^(E−127). In fact, certain bit strings have been set aside in the representation to correspond to specific values, such as zero. These special values are indicated by the exponent field being 0 or 255. The possible values are determined as described in Table 11.2.

Using the normalized numbers alone, there is a small gap around zero when looking at the possible representable numbers on the number line. To fill this gap, the IEEE standard includes denormalized (or subnormal) numbers, an extra range of small numbers used to represent values smaller than the smallest possible normalized number (Figure 11.4).

The IEEE standard also defines the special values plus infinity, minus infinity, and not-a-number (+INF, −INF, and NaN, respectively). These values are further discussed in Section 11.2.2. The signed zero (−0) fills a special role to allow, for example, the correct generation of +INF or −INF when dividing by zero. The signed zero compares equal to 0; that is, −0 = +0 is true.

An important detail about the introduction and widespread acceptance of the IEEE standard is its strict accuracy requirements on the arithmetic operators. In short, the standard states that the floating-point operations must produce results that are correct in all bits. If the result is not machine representable (that is, directly representable in the IEEE format), the result must be correctly rounded (with minimal change in value), with respect to the currently set rounding rule, to either of the two machine-representable numbers surrounding the real number.

For rounding to be performed correctly, the standard requires that three extra bits are used during floating-point calculations: two guard bits and one sticky bit. The two guard bits act as two extra bits of precision, effectively making the mantissa 26 rather than 24 bits. The sticky bit becomes set if one of the omitted bits beyond the guard bits would become nonzero, and is used to help determine the correctly rounded result when the guard bits alone are not sufficient. The three extra bits are used only internally during computations and are not stored.

The standard describes four rounding modes: round toward nearest (rounding toward the nearest representable number, breaking ties by rounding to the even number), round toward negative infinity (always rounding down), round toward positive infinity (always rounding up), and round toward zero (truncating, or chopping, the extra bits). Table 11.3 presents examples of the rounding of some numbers under these rounding modes.

The error introduced by rounding is bounded by the value known as the machine epsilon (or unit round-off error). The machine epsilon is the smallest nonzero machine-representable number that, added to 1, does not equal 1. Specifically, if fl(x) denotes the mapping of the real number x to the nearest floating-point representation, it can be shown that

fl(x) = x(1 + ε), where |ε| ≤ u

and u is the machine epsilon. For single precision, the machine epsilon is 2^−23 ≈ 1.1921 · 10^−7, and for double precision it is 2^−52 ≈ 2.2204 · 10^−16. These values are defined in C and C++ as the constants FLT_EPSILON and DBL_EPSILON, respectively. They can be found in <float.h> for C and in <cfloat> for C++. The machine epsilon can be used to estimate errors in floating-point calculations. An in-depth presentation of floating-point numbers and related issues is given in [Goldberg91]. In the following, floating-point numbers are assumed to be IEEE-754 single-precision numbers unless otherwise noted.
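The following C sketch, assuming a standard-conforming compiler, prints the predefined epsilon constants and also computes a single-precision epsilon directly, illustrating the "smallest value that added to 1 changes the result" characterization given above:

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Constants defined by the C standard library in <float.h>. */
    printf("FLT_EPSILON = %g\n", FLT_EPSILON);   /* 2^-23  ~ 1.1921e-7  */
    printf("DBL_EPSILON = %g\n", DBL_EPSILON);   /* 2^-52  ~ 2.2204e-16 */

    /* Compute a single-precision epsilon by repeated halving: keep the
       last value of eps for which 1 + eps still differs from 1. */
    float eps = 1.0f;
    while ((float)(1.0f + eps / 2.0f) != 1.0f)
        eps /= 2.0f;
    printf("computed single-precision epsilon = %g\n", eps);
    return 0;
}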

IEEE Standard 754 Floating Point Numbers

IEEE Standard 754 floating point is the most common representation today for real numbers on computers. This section gives a brief overview of IEEE floating point and its representation.

Storage Layout

IEEE floating point numbers have three basic components: the sign, the exponent, and the mantissa. The mantissa is composed of the fraction and an implicit leading digit. The exponent base is implicit and need not be stored. The following figure shows the layout for the single, double, and double-extended precision floating-point values; the number of bits for each field is shown and listed in the table.

The Sign Bit

The sign bit is as simple as it gets. 0 denotes a positive number; 1 denotes a negative number. Flipping the value of this bit flips the sign of the number.

The Exponent

The exponent field needs to represent both positive and negative exponents. To do this, a bias is added to the actual exponent in order to get the stored exponent. For IEEE single-precision floats, this value is 127. Thus, an exponent of zero means that 127 is stored in the exponent field. For double precision, the exponent field is 11 bits, and has a bias of 1023.

The Mantissa

The mantissa, also known as the significand, represents the precision bits of the number. It is composed of an implicit leading bit and the fraction bits. It is obvious that any number can be expressed in scientific notation in many different ways. In order to maximize the quantity of representable numbers, floating-point numbers are typically stored in normalized form. This basically puts the radix point after the first non-zero digit. In normalized form, five is represented as 5.0 × 10^0.

Ranges of Floating-Point Numbers

Let's consider single-precision floats for a second. Note that we're taking essentially a 32-bit number and re-jiggering the fields to cover a much broader range. Something has to give, and it's precision. For example, regular 32-bit integers, with all precision centered around zero, can precisely store integers with 32 bits of resolution. Single-precision floating-point, on the other hand, is unable to match this resolution with its 24 bits. It does, however, approximate this value by effectively truncating from the lower end. For example:

11110000 11001100 10101010 00001111  // 32-bit integer

= +1.1110000 11001100 10101010 x 2^31  // single-precision float

= 11110000 11001100 10101010 00000000  // corresponding value

This approximates the 32-bit value, but doesn't yield an exact representation. On the other hand, besides the ability to represent fractional components (which integers lack completely), the floating-point value can represent numbers around 2^127, compared to the 32-bit integer's maximum value of around 2^32.

The range of positive floating point numbers can be split into normalized numbers (which preserve the full precision of the mantissa) and denormalized numbers (discussed later), which use only a portion of the fraction's precision.

Since the sign of floating point numbers is given by a special leading bit, the range for negative numbers is given by the negation of the above values.

There are five distinct numerical ranges that single-precision floating-point numbers are not able to represent:

Negative numbers less than −(2 − 2^−23) × 2^127 (negative overflow)

Negative numbers greater than −2^−149 (negative underflow)

Zero

Positive numbers less than 2^−149 (positive underflow)

Positive numbers greater than (2 − 2^−23) × 2^127 (positive overflow)

Overflow means that values have grown too large for the representation, much in the same way that you can overflow integers. Underflow is a less serious problem because it just denotes a loss of precision, which is guaranteed to be closely approximated by zero.
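The following C sketch, assuming IEEE-754 behavior, shows both situations: doubling the largest finite float overflows to infinity, while repeatedly halving the smallest normalized float walks down through the denormalized range before finally underflowing to zero.

#include <float.h>
#include <stdio.h>

int main(void)
{
    float big   = FLT_MAX;   /* largest finite single-precision value      */
    float small = FLT_MIN;   /* smallest normalized single-precision value */

    float overflowed = big * 2.0f;        /* overflows to +inf */
    printf("FLT_MAX * 2.0f = %g\n", overflowed);

    /* Halving passes through the denormalized numbers and ends at zero. */
    while (small > 0.0f) {
        printf("%g\n", small);
        small /= 2.0f;
    }
    return 0;
}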

The effective range (excluding infinite values) of IEEE floating-point numbers is approximately as follows:

Single precision: denormalized, about ±1.4 × 10^−45 to ±1.18 × 10^−38; normalized, about ±1.18 × 10^−38 to ±3.4 × 10^38.

Double precision: denormalized, about ±4.9 × 10^−324 to ±2.2 × 10^−308; normalized, about ±2.2 × 10^−308 to ±1.8 × 10^308.

Special Values

11.2.2 Infinity Arithmetic

In addition to allowing representation of real numbers, the IEEE-754 standard also introduces the quasi-numbers negative infinity, positive infinity, and NaN (not-a-number, sometimes also referred to as indeterminate or indefinite). Arithmetic on this extended domain is referred to as infinity arithmetic. Infinity arithmetic is noteworthy in that its use often obviates the need to test for and handle special-case exceptions such as division-by-zero errors. Relying on infinity arithmetic does, however, mean paying careful attention to details and making sure the code does exactly the right thing in every case.

IEEE reserves exponent field values of all 0s and all 1s to denote special values in the floating-point scheme.

Zero

As mentioned above, zero is not directly representable in the straight format, due to the assumption of a leading 1 (we'd need to specify a true zero mantissa to yield a value of zero). Zero is a special value denoted with an exponent field of zero and a fraction field of zero. Note that -0 and +0 are distinct values, though they both compare as equal.

Denormalized

If the exponent is all 0s but the fraction is non-zero (else it would be interpreted as zero), then the value is a denormalized number, which does not have an assumed leading 1 before the binary point. Thus, this represents a number (−1)^s × 0.f × 2^−126, where s is the sign bit and f is the fraction. For double precision, denormalized numbers are of the form (−1)^s × 0.f × 2^−1022. From this you can interpret zero as a special type of denormalized number.

Infinity

The values +infinity and -infinity are denoted with an exponent of all 1s and a fraction of all 0s. The sign bit distinguishes between negative infinity and positive infinity. Being able to denote infinity as a specific value is useful because it allows operations to continue past overflow situations. Operations with infinite values are well defined in IEEE floating point.

Not A Number

The value NaN (Not a Number) is used to represent a value that does not represent a real number. NaN's are represented by a bit pattern with an exponent of all 1s and a non-zero fraction. There are two categories of NaN: QNaN (Quiet NaN) and SNaN (Signalling NaN).

A QNaN is a NaN with the most significant fraction bit set. QNaN's propagate freely through most arithmetic operations. These values pop out of an operation when the result is not mathematically defined.

An SNaN is a NaN with the most significant fraction bit clear. It is used to signal an exception when used in operations. SNaN's can be handy to assign to uninitialized variables to trap premature usage.

Semantically, QNaN's denote indeterminate operations, while SNaN's denote invalid operations.
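The following C sketch, assuming a C99 math library, shows how these special values arise from ordinary arithmetic and how NaNs behave in comparisons (a NaN compares unequal to everything, including itself):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double zero      = 0.0;
    double nan_value = zero / zero;   /* 0/0 yields a quiet NaN */
    double inf_value = 1.0 / zero;    /* 1/0 yields +infinity   */

    /* A quiet NaN propagates through arithmetic ... */
    printf("nan + 1.0   = %f\n", nan_value + 1.0);

    /* ... and compares unequal to everything, including itself. */
    printf("nan == nan  : %d\n", nan_value == nan_value);   /* prints 0 */
    printf("isnan(nan)  : %d\n", isnan(nan_value));         /* nonzero  */
    printf("isinf(inf)  : %d\n", isinf(inf_value));         /* nonzero  */
    return 0;
}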

11.3 Floating-point Error Sources, or Why Most Non-robustness Occurs

In this subsection, the unexpected errors that arise from floating-point arithmetic are explained. Notably, this is the main source among all the types of error causes listed above, and it produces unexpected results in both numeric and geometric computations.

Although well defined, floating-point representations and arithmetic are inherently inexact. A first source of error is that certain numbers are not exactly representable in some number bases. For example, in base 10, 1/3 is not exactly representable in a fixed number of digits, unlike, say, 0.1. However, in a binary floating-point representation 0.1 is no longer exactly representable but is instead given by the repeating fraction (0.0001100110011...)2. When this number is normalized and rounded off to 24 bits (including the one bit that is not stored), the mantissa bit pattern ends in ...11001101, where the last (least significant) bit has been rounded up. The IEEE single-precision representation of 0.1 is therefore slightly larger than 0.1. As virtually all current CPUs use binary floating-point systems, the following code is thus extremely likely to print "Greater than" rather than anything else.
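The original code listing is not reproduced in this extract; the following minimal C sketch exhibits the behavior described, by comparing the single-precision constant 0.1f (promoted to double) against the double-precision constant 0.1:

#include <stdio.h>

int main(void)
{
    /* 0.1f is rounded up slightly when stored in single precision, so when
       promoted to double it exceeds the (closer) double approximation of 0.1. */
    if (0.1f > 0.1)
        printf("Greater than\n");
    else if (0.1f < 0.1)
        printf("Less than\n");
    else
        printf("Equal\n");
    return 0;
}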

That some numbers are not exactly representable also means that whereas replacing x/2.0f with x*0.5f is exact (as both 2.0 and 0.5 are exactly representable), replacing x/10.0f with x*0.1f is not. As multiplications are generally faster than divisions (by up to about a magnitude, though the gap is closing on current architectures), the latter replacement is still frequently and deliberately performed for reasons of efficiency.

It is important to realize that floating-point arithmetic does not obey ordinary arithmetic rules. For example, round-off errors may cause a small but nonzero value added to or subtracted from a large value to have no effect. Therefore, mathematically equivalent expressions can produce completely different results when evaluated using floating-point arithmetic.

Consider the expression 1.5e3 + 4.5e-6. In real arithmetic, this expression corresponds to 1500.0 + 0.0000045, which equals 1500.0000045. Because single-precision floats can hold only about seven decimal digits, the result is truncated to 1500.0 and digits are lost. Thus, in floating-point arithmetic a + b can equal a even though b is nonzero and both a and b can be expressed exactly! A consequence of the presence of truncation errors is that floating-point arithmetic is not associative. In other words, (a + b) + c is not necessarily the same as a + (b + c); suitable values of a, b, and c illustrate the difference between the two expressions (one such choice is shown in the sketch below). As a corollary, note that a compiler will not (or should not) turn x + 2.0f + 0.5f into x + 2.5f, as this could give a completely different result. The compiler will be able to fold floating-point constants only from the left for left-associative operators (which most operators are). That is, turning 2.0f + 0.5f + x into 2.5f + x at compile time is sound. A good coding rule is to place floating-point constants to the left of any variables (unless something else specific is intended with the operation).
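The particular values of a, b, and c used in the original text are not reproduced here; the following C sketch uses illustrative values of its own that show the same effect in single precision:

#include <stdio.h>

int main(void)
{
    /* Illustrative values: a large number, its negation, and a small number. */
    float a = 1.0e7f;
    float b = -1.0e7f;
    float c = 0.5f;

    float ab = a + b;        /* exactly 0.0                             */
    float bc = b + c;        /* rounds back to -1.0e7f; the 0.5 is lost */

    printf("(a + b) + c = %g\n", ab + c);   /* prints 0.5 */
    printf("a + (b + c) = %g\n", a + bc);   /* prints 0   */
    return 0;
}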

Repeated rounding errors cause numbers slowly to erode from the right, from the least significant digits. This leads to loss of precision, which can be a serious problem. Even worse is the problem of cancellation, which can lead to significant digits being lost at the front (left) of a number. This loss is usually the result of a single operation: the subtraction of two subexpressions nearly equal in value (or, equivalently, the addition of two numbers of near-equal magnitude but of opposite sign). As an example, consider computing the discriminant b² − 4ac of a quadratic equation for b = 3.456, a = 1.727, and c = 1.729. The exact answer is 4 · 10^−6, but single-precision evaluation gives the result as 4.82387 · 10^−6, of which only the first digit is correct! When, as here, errors become significant due to cancellation, the cancellation is referred to as catastrophic. Cancellation does not have to be catastrophic, however. For example, consider the subtraction 1.875 − 1.625 = 0.25. Expressed in binary it reads 1.111 − 1.101 = 0.010. Although significant digits have been lost in the subtraction, this time it is not a problem because all three quantities are exactly representable. In this case, the cancellation is benign. In all, extreme care must be taken when working with floating-point arithmetic. Even something as seemingly simple as selecting between the two expressions a(b − c) and ab − ac is deceptively difficult. If b and c are close in magnitude, the first expression exhibits a cancellation problem. The second expression also has a cancellation problem when ab and ac are nearly equal. When |a| > 1, the difference between ab and ac increases more (compared to that between b and c) and the cancellation problem of the second expression is reduced. However, when |a| < 1 the terms ab and ac become smaller, amplifying the cancellation error, while at the same time the cancellation error in the first expression is scaled down. Selecting the best expression is therefore contingent on knowledge about the values taken by the involved variables. (Other factors should also be considered, such as overflow and underflow errors.)
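A minimal C sketch of the discriminant example just discussed, comparing a single-precision evaluation of b·b − 4·a·c with a double-precision one:

#include <stdio.h>

int main(void)
{
    /* Values from the text; the exact discriminant is 4e-6. */
    float  af = 1.727f, bf = 3.456f, cf = 1.729f;
    double ad = 1.727,  bd = 3.456,  cd = 1.729;

    float  single = bf * bf - 4.0f * af * cf;  /* suffers catastrophic cancellation */
    double dbl    = bd * bd - 4.0  * ad * cd;  /* much closer to the exact answer   */

    printf("single precision: %.9g\n", single);
    printf("double precision: %.17g\n", dbl);
    return 0;
}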

In many cases rewriting an expression can help avoid a cancellation error: a function that exhibits a cancellation error, for example for large x, can often be rewritten in an algebraically equivalent form in which the cancellation is removed (see the sketch below).
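The specific function from the original text is not reproduced in this extract. A standard example of the technique, used here purely as an illustration, is f(x) = sqrt(x + 1) − sqrt(x), which subtracts two nearly equal values for large x; multiplying by the conjugate rewrites it as 1/(sqrt(x + 1) + sqrt(x)), which is free of the cancellation:

#include <math.h>
#include <stdio.h>

/* Naive form: sqrt(x + 1) - sqrt(x) cancels catastrophically for large x. */
static double f_naive(double x)
{
    return sqrt(x + 1.0) - sqrt(x);
}

/* Rewritten form: algebraically equal, but without the cancellation. */
static double f_rewritten(double x)
{
    return 1.0 / (sqrt(x + 1.0) + sqrt(x));
}

int main(void)
{
    double x = 1.0e12;
    printf("naive     : %.17g\n", f_naive(x));
    printf("rewritten : %.17g\n", f_rewritten(x));
    return 0;
}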

Many practical examples of avoiding errors in finite-precision calculations are provided in [Acton96].

Geometrical robustness

Geometrical robustness concerns whether computed results remain consistent with the geometric configuration they are supposed to describe. For example, the computed intersection point of a line with a triangle may actually lie slightly outside the triangle. Similarly, the midpoint of a line segment is unlikely to lie on the supporting line of the segment.

Solution

In this section, the important and practical solutions for dealing with non-robustness in both numeric and geometrical computations are reviewed and summarized.

These approaches can be classified into two types: algorithmic approaches and arithmetic approaches.

The algorithmic approaches

1. Predicates: comparison tests, the orientation test (in 2D and 3D), and the in-circle / in-sphere test.

Degenerations refer to special cases that in some way require special treatment. Degenerations typically manifest themselves through predicate failures. A predicate is an elementary geometric query returning one of a small number of enumerated values. Failures occur when a predicate result is incorrectly computed or when it is not correctly handled by the calling code. In geometrical algorithms, predicates commonly return a result based on the sign of a polynomial expression. In fact, the determinant predicates given in Chapter 3 are examples of this type of predicate. Consider, for example, the ORIENT2D(A,B,C) predicate. Recall that it returns whether the 2D point C is to the left, to the right, or on the directed line AB. When the three points lie on a line, the smallest errors in position for any one point can throw the result either way. If the predicate is evaluated using floating-point arithmetic, the predicate result is therefore likely completely unreliable. Similar robustness problems occur for all other determinant predicates presented in Chapter 3.
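A minimal C sketch of such a predicate, a straightforward floating-point evaluation of the determinant ORIENT2D(A, B, C) = (a_x − c_x)(b_y − c_y) − (a_y − c_y)(b_x − c_x), is shown below. As discussed above, the sign it returns cannot be trusted when the three points are nearly collinear:

#include <stdio.h>

/* Returns a value > 0 if C lies to the left of the directed line AB,
   < 0 if it lies to the right, and 0 if the three points are collinear.
   Evaluated in plain floating point, the sign is unreliable for nearly
   collinear inputs. */
static double orient2d(const double a[2], const double b[2], const double c[2])
{
    return (a[0] - c[0]) * (b[1] - c[1]) - (a[1] - c[1]) * (b[0] - c[0]);
}

int main(void)
{
    double a[2] = { 0.0, 0.0 };
    double b[2] = { 4.0, 2.0 };
    double c[2] = { 1.0, 1.0 };

    double d = orient2d(a, b, c);
    printf("orient2d = %g (%s)\n", d,
           d > 0.0 ? "left" : (d < 0.0 ? "right" : "on the line"));
    return 0;
}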

Many algorithms are formulated — explicitly or implicitly — in terms of predicates. When these predicates fail, the algorithm as a whole may fail, returning the wrong result (perhaps an impossible result) or may even get stuck in an infinite loop. Addressing degeneracy problems involves fixing the algorithms to deal gracefully with predicates that may return the incorrect result, adjusting the input so that troublesome cases do not appear or relying on extended or arbitrary precision arithmetic to evaluate the predicate expressions exactly.

2. Merging points and finding topology.

The arithmetic approaches

1. Arbitrary-precision floating-point arithmetic.

2. Exact Geometric Computation (EGC).


