Can You Trust Your Computer?
CS365 – Mathematics of Computer Science
Spring 2007
David Uhrig and Tiffany Sharrard
Introduction
- User's Perception of Computers
- Real vs. Floating Point Numbers
- Rounding and Chopping
- Overflow and Underflow
- Machine Constants
- Error Propagation and Analysis
- New Ideas and Solutions
- Conclusion
- Questions
User’s Perception of Computers
How are computers perceived by users?
- A computer is seen as a tool that will give you an exact answer
What a computer may actually do:
- Produce only garbage, because of how the computer handles real numbers

Example: 10^48 + 914 - 10^48 + 10^32 + 615 - 10^32
The answer to this is 1529, but most digital computers would return zero.
Why?
Real vs. Floating Point
Real Number System
- Can be written in decimal notation
- Can be infinite
- Includes all positive and negative integers, fractions, and irrational numbers
Real vs. Floating Point
Floating Point Number System
A t-digit base b floating-point number has the form:

    ± 0.d1d2d3…dt × b^e

where d1d2d3…dt is the mantissa, b is the base of the number system, and e is the exponent
Real vs. Floating Point
Floating Point Number System (cont’d)
The exponent is an integer between two fixed integer bounds e1 and e2, with e1 <= 0 <= e2
Real vs. Floating Point
Floating Point Number System (cont’d)
Normalized: the leading digit d1 is nonzero
The system is characterized by:
- Base b
- Length of the mantissa t
- Bounds for the exponent, e1 and e2
Rounding vs. Chopping
Chop
- A number is chopped to t digits when all the digits past t are discarded
- Example: t = 5, x = 2.5873892874, result = 2.5873
Rounding vs. Chopping
Round
- A number x is rounded to t digits when x is replaced by a t-digit number that approximates x with minimum error
- Example: t = 5, x = 2.5873892874, result = 2.5874
Overflow vs. Underflow
Overflow
- Occurs when the result of a floating point operation is larger than the largest floating point number in the given floating point number system
- When this occurs, almost all computers will signal an error message
Overflow vs. Underflow
Underflow
- Occurs when the result of a computation is smaller than the smallest quantity the computer can store
- Some computers never flag this error, because the machine silently sets the number to zero
Machine Constants
- Amount of round-off depends on the floating-point format your computer uses
- Before the error can be corrected, the machine constants need to be identified
- Constants vary greatly by hardware
- IEEE 754 is the Standard for Binary Floating-Point Arithmetic
Machine Constants

Computer             R/C  β   t   L        U       ε
CDC CYBER 170        R    2   48  -976     1,071   3.55×10^-15
CDC CYBER 205        C    2   47  -28,626  28,718  1.42×10^-14
Cray-1               C    2   48  -8,192   8,191   7.11×10^-15
DEC VAX (single)     R    2   24  -127     127     5.96×10^-8
DEC VAX (double)     R    2   56  -1,023   1,023   1.11×10^-16
HP-11C, 15C          R    10  10  -99      99      5.00×10^-10
IBM 3033 (single)    C    16  6   -64      63      9.54×10^-7
IBM 3033 (double)    C    16  14  -64      63      2.22×10^-16
IBM/PC (single)      R    2   24  -126     127     5.96×10^-8
IBM/PC (double)      R    2   53  -1,022   1,023   1.11×10^-16
PRIME 850 (single)   C    2   23  -128     127     2.38×10^-7
PRIME 850 (double)   C    2   47  -32,896  32,639  1.42×10^-14
IEEE 754 Standard
Machine Epsilon
To quantify the amount of round-off error, a round-off unit is specified:
ε - Machine Epsilon, or Machine Precision
This is the fractional accuracy of a floating point number, defined by:

    ƒl(1 + ε) > 1

where ε is the smallest positive floating point number for which this holds.
Computing ε
Program Output

david@david-laptop:~$ ./findepsilon
current Epsilon, 1 + current Epsilon
1	2.00000000000000000000
0.5	1.50000000000000000000
0.25	1.25000000000000000000
0.125	1.12500000000000000000
0.0625	1.06250000000000000000
0.03125	1.03125000000000000000
0.015625	1.01562500000000000000
0.0078125	1.00781250000000000000
0.00390625	1.00390625000000000000
0.00195312	1.00195312500000000000
0.000976562	1.00097656250000000000
0.000488281	1.00048828125000000000
0.000244141	1.00024414062500000000
0.00012207	1.00012207031250000000
6.10352E-05	1.00006103515625000000
3.05176E-05	1.00003051757812500000
1.52588E-05	1.00001525878906250000
7.62939E-06	1.00000762939453125000
3.8147E-06	1.00000381469726562500
1.90735E-06	1.00000190734863281250
9.53674E-07	1.00000095367431640625
4.76837E-07	1.00000047683715820312
2.38419E-07	1.00000023841857910156

Calculated Machine epsilon: 1.19209E-07
david@david-laptop:~$
C Code *
#include <stdio.h>

int main(int argc, char **argv) {
    float machEps = 1.0f;

    printf("current Epsilon, 1 + current Epsilon\n");
    while (1) {
        printf("%G\t%.20f\n", machEps, (1.0f + machEps));
        machEps /= 2.0f;
        // If next epsilon yields 1, then break, because
        // current epsilon is the machine epsilon.
        if ((float)(1.0 + (machEps / 2.0)) == 1.0)
            break;
    }

    printf("\nCalculated Machine epsilon: %G\n", machEps);
    return 0;
}
* - Code borrowed from Wikipedia Entry on Machine Epsilon:http://en.wikipedia.org/wiki/Machine_epsilon
Error Propagation
- An optimistic estimate for the round-off accumulation in performing N arithmetic operations is roughly √N·ε
- It could be N·ε or even larger!
Example: Subtractive Cancellation
In 4-digit base 10 arithmetic:

    ƒl [(10000 + 1) - 10000] = 0

but exactly:

    (10000 + 1) - 10000 = 1
Error Analysis
Two primary techniques of error analysis:
Forward Error Analysis
- The floating-point representation of the error is subjected to the same mathematical operations as the data itself
- Produces an equation for the error itself
Backward Error Analysis
- Attempts to regenerate the original mathematical problem from previously computed solutions
- Minimizes error generation and propagation
Testing for Error Propagation
- Use the computed solution in the original problem
- Use Double or Extended Precision rather than Single Precision
- Rerun the problem with slightly modified (incorrect) data and look at the results
New Ideas
Increased RAM and processor speeds allow for more intricate solutions and alternatives to floating point errors.
Nonfloating-point arithmetic implementations:
- Rational Arithmetic
- Multiple or Full Precision Arithmetic
- Scalar and Dot Products of Vectors
Conclusion
- User's Perception of Computers
- Real vs. Floating Point Numbers
- Rounding and Chopping
- Overflow and Underflow
- Machine Constants
- Error Propagation and Analysis
- New Ideas and Solutions

...Questions?