A New Approach to Probabilistic Rounding Error Analysis

Nick Higham
School of Mathematics, The University of Manchester
www.maths.manchester.ac.uk/~higham

Feng Kang Distinguished Lecture, Chinese Academy of Sciences, Beijing

Joint work with Theo Mary
James Hardy Wilkinson (1919–1986)
Trinity College, Cambridge University, 1936–1939.
National Physical Laboratory, 1945–.
Worked with Turing and then headed the Pilot ACE group (ACE first ran May 10, 1950).
Developed backward error analysis.
Elected Fellow of the Royal Society, 1969.
ACM Turing Award 1970; SIAM John von Neumann Lecture, 1970.
Nick Higham New Probabilistic Rounding Error Analysis 2 / 40
James H. Wilkinson (1919–1986) Centenary
https://nla-group.org/james-hardy-wilkinson

Wilkinson site; blog posts during 2019.
Advances in Numerical Linear Algebra, Manchester, May 29–30, 2019, celebrating the centenary of the birth of James H. Wilkinson.
Today’s Floating-Point Arithmetics

Type                Bits  Range       u = 2^(-t)
bfloat16 (half)      16   10^(±38)    2^(-8)   ≈ 3.9 × 10^(-3)
fp16 (half)          16   10^(±5)     2^(-11)  ≈ 4.9 × 10^(-4)
fp32 (single)        32   10^(±38)    2^(-24)  ≈ 6.0 × 10^(-8)
fp64 (double)        64   10^(±308)   2^(-53)  ≈ 1.1 × 10^(-16)
fp128 (quadruple)   128   10^(±4932)  2^(-113) ≈ 9.6 × 10^(-35)

The fp* formats are all in the IEEE standard, but fp16 is a storage format only. Arithmetic ops (+, −, ∗, /, √) are performed as if first calculated to infinite precision, then rounded. Default: round to nearest, round to even in case of a tie.

bfloat16 is used by the Google TPU and the forthcoming Intel Nervana Neural Network Processor.
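The u column for the IEEE formats that NumPy exposes can be checked directly (bfloat16 and fp128 are not standard NumPy dtypes, so they are omitted):

```python
import numpy as np

# u is half the machine epsilon: eps = 2^(1-t), so u = eps/2 = 2^(-t).
for name, dtype, t in [("fp16", np.float16, 11),
                       ("fp32", np.float32, 24),
                       ("fp64", np.float64, 53)]:
    u = np.finfo(dtype).eps / 2
    print(name, u, u == 2.0 ** -t)   # each format matches its table entry
```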
Rounding
For x ∈ R, fl(x) is an element of F nearest to x, and the transformation x → fl(x) is called rounding (to nearest).

Theorem. If x ∈ R lies in the range of F then

    fl(x) = x(1 + δ), |δ| ≤ u.

u := (1/2)β^(1−t) is the unit roundoff, or machine precision.

The normalized nonnegative numbers for β = 2, t = 3 lie in [0.5, 7]; the slide shows them on a number line with ticks at 0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, the spacing doubling at each power of 2.
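A toy sketch that enumerates such a system: the normalized numbers 1.b1b2 × 2^e for β = 2, t = 3, with the exponent range chosen (an assumption) to match the slide's number line:

```python
# Toy binary format: beta = 2, t = 3 significand digits (1.b1b2).
# Exponent range e = -1..2 reproduces the slide's interval [0.5, 7].
F = sorted({(4 + m) / 4 * 2.0 ** e       # significands 1.00, 1.01, 1.10, 1.11
            for m in range(4) for e in range(-1, 3)})
print(F)
# The spacing within [2^e, 2^(e+1)) is 2^(e-2): it doubles at each power of 2.
```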
Model for Rounding Error Analysis
For x, y ∈ F,

    fl(x op y) = (x op y)(1 + δ), |δ| ≤ u, op = +, −, ∗, /.

Also holds for op = √.

Sometimes it is more convenient to use

    fl(x op y) = (x op y)/(1 + δ), |δ| ≤ u, op = +, −, ∗, /.

The model is weaker than requiring fl(x op y) to be correctly rounded.
Model vs Correctly Rounded Result

y = x(1 + δ) with |δ| ≤ u does not imply y = fl(x). For β = 10, t = 2:

    x      y    |x − y|/x   (1/2)10^(1−t)
    9.185  8.7  5.28e-2     5.00e-2
    9.185  8.8  4.19e-2     5.00e-2
    9.185  8.9  3.10e-2     5.00e-2
    9.185  9.0  2.01e-2     5.00e-2
    9.185  9.1  9.25e-3     5.00e-2
    9.185  9.2  1.63e-3     5.00e-2
    9.185  9.3  1.25e-2     5.00e-2
    9.185  9.4  2.34e-2     5.00e-2
    9.185  9.5  3.43e-2     5.00e-2
    9.185  9.6  4.52e-2     5.00e-2
    9.185  9.7  5.61e-2     5.00e-2
Precision versus Accuracy
    fl(abc) = ab(1 + δ1) · c(1 + δ2),  |δi| ≤ u,
            = abc(1 + δ1)(1 + δ2)
            ≈ abc(1 + δ1 + δ2).

Precision = u. Accuracy ≈ 2u.

Accuracy is not limited by precision.
Fused Multiply-Add Instruction
Intel IA-64 architecture and some modern GPUs have a fused multiply-add instruction with just one rounding error:

    fl(x + y ∗ z) = (x + y ∗ z)(1 + δ), |δ| ≤ u.

With an FMA, the inner product x^T y can be computed with half the rounding errors. The algorithm

    1  w = b ∗ c
    2  e = w − b ∗ c
    3  x = (a ∗ d − w) + e

computes x = det([a b; c d]) = ad − bc with high relative accuracy (Kahan); steps 2 and 3 each use one FMA.
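A sketch of Kahan's determinant algorithm in Python. Since a portable FMA is not assumed available here, the two fused steps are emulated exactly with rational arithmetic, and the inputs are illustrative values (not from the slide) chosen so that the naive determinant cancels completely:

```python
from fractions import Fraction

def kahan_det(a, b, c, d):
    """Kahan's 2x2 determinant ad - bc. The two FMAs
    (e = -fma(b, c, -w) and fma(a, d, -w)) are emulated via exact
    rational arithmetic; each incurs a single rounding."""
    w = b * c                                          # rounded product
    e = float(Fraction(b) * Fraction(c) - Fraction(w)) # exact product residual
    t = float(Fraction(a) * Fraction(d) - Fraction(w)) # one rounding, as an FMA
    return t + e

a = d = 1 + 2.0 ** -27
b, c = 1 + 2.0 ** -26, 1.0
print(a * d - b * c)           # naive: 0.0, total cancellation
print(kahan_det(a, b, c, d))   # 2^-54, the exact determinant
```

Here a*d rounds down to 1 + 2^-26 = b*c, so the naive formula returns exactly zero, while the FMA version recovers the true value.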
Fused Multiply-Add Instruction (cont.)
But:

What does a*d + c*b mean? (Which product does the compiler fuse?)

The product

    (x + iy)∗(x + iy) = x^2 + y^2 + i(xy − yx)

may evaluate to non-real with an FMA, since the imaginary part xy − yx need not evaluate to zero.

b^2 − 4ac can evaluate negative even when b^2 ≥ 4ac.
Error Analysis in Low Precision (1)
For the inner product x^T y of n-vectors the standard error bound is

    | fl(x^T y) − x^T y | ≤ γ_n |x|^T |y|,  γ_n = nu/(1 − nu),  nu < 1.

It can also be written as

    | fl(x^T y) − x^T y | ≤ nu |x|^T |y| + O(u^2).

In half precision, u ≈ 4.9 × 10^(−4), so nu = 1 for n = 2048.

What happens when nu > 1?
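The first-order bound can be compared with the actual fp32 error of a random inner product (the data and size here are illustrative choices, with fp64 used as the reference):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10 ** 5
x = rng.uniform(-1, 1, n).astype(np.float32)
y = rng.uniform(-1, 1, n).astype(np.float32)

u = 2.0 ** -24
exact = float(x.astype(np.float64) @ y.astype(np.float64))
err = abs(float(x @ y) - exact)
bound = n * u * float(np.abs(x).astype(np.float64) @ np.abs(y).astype(np.float64))
print(err, bound)   # the worst-case bound overestimates the actual error
```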
Error Analysis in Low Precision (2)
Rump & Jeannerod (2014) prove that in a number of standard rounding error bounds, γ_n = nu/(1 − nu) can be replaced by nu provided that round to nearest is used.

The analysis is nontrivial; only a few core algorithms have been analyzed.

The backward error bound for Ax = b is now (3n − 2)u + (n^2 − n)u^2 instead of γ_3n.

γ_n cannot be replaced by nu in all algorithms (pairwise summation).

Once nu ≥ 1 the bounds cannot guarantee any accuracy, maybe not even a correct exponent!
A Simple Loop
    x = pi; i = 0;
    while x/2 > 0
        x = x/2; i = i+1;
    end
    for k = 1:i
        x = 2*x;
    end

    Precision  i     |x − π|
    Double     1076  0.858
    Single     151   0.858
    Half       26    0.858

Why these large errors? Why the same error for each precision?
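The Double row of the table can be reproduced in Python, whose floats are IEEE doubles:

```python
import math

x, i = math.pi, 0
while x / 2 > 0:        # halve until x/2 underflows to zero
    x = x / 2
    i += 1
for _ in range(i):      # double back up the same number of times
    x = 2 * x

print(i, x, abs(x - math.pi))
```

The doublings return a power of 2 (here 4.0): every significand bit of π is rounded away on the way down through the subnormal range. The same happens in each format, which is why the error is always |4 − π| ≈ 0.858.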
Things That Help Reduce Error
A fused multiply-add (FMA) may be available (NVIDIA V100): reduces the error bound by a factor of 2.

Parallel implementations sum by pairwise summation (binary tree), giving error constant log2(n) instead of n.

80-bit registers on Intel chips may be exploited by the compiler in evaluating inner products, etc.

Statistical distribution of errors.
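Pairwise summation can be sketched as follows; the constant summand 0.1 is an illustrative choice that is adversarial for one-at-a-time summation, while pairwise summation handles it exactly (adding two equal floats is exact):

```python
import numpy as np

def pairwise_sum(x):
    """Recursive pairwise (binary-tree) summation in the array's own
    precision: the error constant grows like log2(n) instead of n."""
    if len(x) <= 2:
        return x.sum(dtype=x.dtype)
    m = len(x) // 2
    return pairwise_sum(x[:m]) + pairwise_sum(x[m:])

n = 2 ** 16
x = np.full(n, np.float32(0.1))      # constant summand
exact = n * float(np.float32(0.1))   # exact sum, representable in fp64

s = np.float32(0.0)
for v in x:                          # one-at-a-time summation in fp32
    s = s + v
naive_err = abs(float(s) - exact)

pairwise_err = abs(float(pairwise_sum(x)) - exact)
print(naive_err, pairwise_err)       # pairwise is exact for this input: 0.0
```

With n a power of 2 and all entries equal, every pairwise addition is of two equal values and hence exact, so the only error would come from unequal intermediate sums; the running-sum version drifts from its third addition onwards.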
The Difficulty of Rounding Error Analysis
Deciding what to try to prove.
Type of analysis: componentwise backward error, normwise forward error, or mixed backward–forward error.
Knowing what model to use.
Keep or discard second order terms?
Ignore possibility of underflow and overflow?
Assume real data?
References for Floating-Point
Handbook of Floating-Point Arithmetic, second edition. Jean-Michel Muller, Nicolas Brunie, Florent de Dinechin, Claude-Pierre Jeannerod, Mioara Joldes, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, and Serge Torres.
Traditional Bounds Are Pessimistic (1)

Traditional bounds are worst-case bounds and are pessimistic on average. For Uniform[−1, 1] data:

[Plots of backward error and bounds against n, 10^1 ≤ n ≤ 10^4: matrix–vector product (fp32) and solution of Ax = b (fp32).]
Traditional Bounds Are Pessimistic (2)

[Plots of backward error and bounds against n, 10^0 ≤ n ≤ 10^3: matrix–vector product (fp16) and matrix–vector product (fp8).]

Traditional bounds do not provide a realistic picture of the typical behavior of numerical computations.
The Rule of Thumb
“In general, the statistical distribution of the rounding errors will reduce considerably the function of n occurring in the relative errors. We might expect in each case that this function should be replaced by something which is no bigger than its square root.”
— Wilkinson, 1961
Statistical Argument
Consider a sum of rounding errors s = ∑_{i=1}^n δ_i, |δ_i| ≤ u.

The worst-case bound |s| ≤ nu is attainable, but unlikely!

Assume the δ_i are independent random variables of mean zero. Central limit theorem: for sufficiently large n,

    s/√n ∼ N(0, u),

hence |s| ≤ λ√n u, with λ a small constant, holds with high probability (e.g., 99.7% with λ = 3 by the 3-sigma rule).

But: s is only the linear part of the error. And what n is “sufficiently large”?
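A Monte Carlo sketch of this argument, with the δ_i modeled as Uniform[−u, u] (one convenient choice of distribution, not required by the analysis):

```python
import numpy as np

u, n, trials = 2.0 ** -24, 1000, 10 ** 4
rng = np.random.default_rng(0)
delta = rng.uniform(-u, u, size=(trials, n))   # simulated rounding errors
s = delta.sum(axis=1)

lam = 3.0
frac = float(np.mean(np.abs(s) <= lam * np.sqrt(n) * u))
print(frac)                          # nearly all trials satisfy |s| <= 3*sqrt(n)*u
print(float(np.abs(s).max()) / (n * u))   # worst case nu is never approached
```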
Objective
Fundamental lemma in backward error analysis (Higham, 2002): if |δ_i| ≤ u for i = 1: n and nu < 1 then

    ∏_{i=1}^n (1 + δ_i) = 1 + θ_n,  where  |θ_n| ≤ γ_n := nu/(1 − nu) = nu + O(u^2).

This is the basis of most rounding error analyses. We seek an analogous result with a smaller, but probabilistic, bound on θ_n. We will focus on backward error results.
Probabilistic Model of Rounding Errors
In the computation of interest, the quantities δ in

    fl(a op b) = (a op b)(1 + δ),  |δ| ≤ u,  op ∈ {+, −, ×, /},

are independent random variables of mean zero.

“There is no claim that ordinary rounding and chopping are random processes, or that successive errors are independent. The question to be decided is whether or not these particular probabilistic models of the processes will adequately describe what actually happens.”
— Hull & Swenson, 1966
Why Might the Model Not be Realistic?
In some cases δ ≡ 0, e.g., for integer operands in x + y or xy, or when an operand is a power of 2 in xy or x/y.

Pairs of operands might be repeated, so different δ are the same.

Non-pathological examples can be found where rounding errors are strongly correlated (Kahan).

If an operand comes from an earlier computation it will depend on an earlier δ, and so the new δ will depend on a previous one.
The Key Ideas

Transform the product into a sum by taking the logarithm:

    S = log ∏_{i=1}^n (1 + δ_i) = ∑_{i=1}^n log(1 + δ_i).

Hoeffding’s concentration inequality: let X_1, ..., X_n be independent random variables satisfying |X_i| ≤ c_i. Then the sum S = ∑_{i=1}^n X_i satisfies

    Pr(|S − E(S)| ≥ ξ) ≤ 2 exp(−ξ^2 / (2 ∑_{i=1}^n c_i^2)).

Apply this to X_i = log(1 + δ_i); this requires bounding log(1 + δ_i) and E(log(1 + δ_i)) using Taylor expansions. Retrieve the result by taking the exponential of S.
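An empirical sanity check of Hoeffding's inequality, with Uniform[−c, c] summands as an arbitrary bounded, mean-zero test distribution:

```python
import numpy as np

n, c, trials = 100, 1.0, 2 * 10 ** 4
rng = np.random.default_rng(1)
S = rng.uniform(-c, c, size=(trials, n)).sum(axis=1)   # E(S) = 0

xi = 30.0
empirical = float(np.mean(np.abs(S) >= xi))            # observed tail frequency
hoeffding = 2 * np.exp(-xi ** 2 / (2 * n * c ** 2))    # Hoeffding upper bound
print(empirical, hoeffding)   # the empirical tail sits below the bound
```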
Probabilistic Error Bound
Theorem (H & Mary, 2018). Let δ_i, i = 1: n, be independent random variables of mean zero such that |δ_i| ≤ u. For any constant λ > 0, the relation ∏_{i=1}^n (1 + δ_i) = 1 + θ_n holds with

    |θ_n| ≤ γ_n(λ) := exp(λ√n u + nu^2/(1 − u)) − 1 ≤ λ√n u + O(u^2),

with probability of failure P(λ) = 2 exp(−λ^2 (1 − u)^2 / 2).

Key features:

Exact bound, not first order.
nu < 1 not required.
No “n is sufficiently large” assumption.
Small values of λ suffice: P(2) ≈ 0.27, P(5) ≤ 10^(−5).
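Evaluating the two constants for fp32 and n = 10^6 shows the gap (a direct computation of the formulas above):

```python
import math

u, n, lam = 2.0 ** -24, 10 ** 6, 1.0

gamma_n = n * u / (1 - n * u)                       # worst-case constant
gamma_prob = math.exp(lam * math.sqrt(n) * u
                      + n * u ** 2 / (1 - u)) - 1   # probabilistic constant
P = 2 * math.exp(-lam ** 2 * (1 - u) ** 2 / 2)      # failure probability

print(gamma_n, gamma_prob, gamma_n / gamma_prob)    # ratio about sqrt(n) = 1000
```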
Inner Products
Theorem. Let y = a^T b, where a, b ∈ R^n. No matter what the order of evaluation, the computed ŷ satisfies

    ŷ = (a + ∆a)^T b,  |∆a| ≤ γ_n(λ)|a| ≤ λ√n u |a| + O(u^2),

with probability of failure nP(λ).

There is now a factor n in front of P(λ). Any λ > 0 can be chosen. An analogous result holds for matrix–vector products.
LU Factorization
Theorem. The computed LU factors L̂, Û from Gaussian elimination on A ∈ R^(n×n) satisfy L̂Û = A + ∆A, where

    |∆A| ≤ γ_n(λ)|L̂||Û|,  γ_n(λ) ≤ λ√n u + O(u^2),

holds with probability of failure (n^3/3 + n^2/2 + 7n/6)P(λ).

We want probabilities independent of n! Fortunately,

    O(n^3)P(λ) = O(1)  ⇒  λ = O(√(log n))  ⇒  the error grows no faster than √(n log n) u,

and the constant hidden in the big O is small:

    (n^3/3)P(13) ≤ 10^(−5) for n ≤ 10^10.
Probabilities of Success for A = LU Bounds
    λ     n = 10^2     n = 10^3     n = 10^4
    7.0   9.9998e-01   9.8474e-01   −1.4265e+01
    7.5   1.0000e+00   9.9959e-01   5.9320e-01
    8.0   1.0000e+00   9.9999e-01   9.9156e-01
    8.5   1.0000e+00   1.0000e+00   9.9986e-01
    9.0   1.0000e+00   1.0000e+00   1.0000e+00

(A negative entry means the success probability 1 − (n^3/3 + n^2/2 + 7n/6)P(λ) is vacuous for that λ and n.)
MATLAB Experiments
Simulate fp16 and fp8 with the MATLAB function chop (H & Pranesh, 2019).

Compare the bounds γ_n and γ_n(λ) with the componentwise backward error ε_bwd (Oettli–Prager):

Matrix–vector product y = Ax: ε_bwd = max_i |ŷ − y|_i / (|A||x|)_i.

Solution of Ax = b via LU factorization: ε_bwd = max_i |Ax̂ − b|_i / (|L̂||Û||x̂|)_i.

A and x are chosen with random entries, Uniform[−1, 1] or Uniform[0, 1].

Codes available online: https://gitlab.com/theo.andreas.mary/proberranalysis
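The matrix–vector backward error is straightforward to evaluate in NumPy; fp32 with an fp64 reference is an illustrative setup in the spirit of, but not identical to, the MATLAB codes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
A = rng.uniform(-1, 1, (n, n)).astype(np.float32)
x = rng.uniform(-1, 1, n).astype(np.float32)

y_hat = (A @ x).astype(np.float64)                # computed in fp32
y = A.astype(np.float64) @ x.astype(np.float64)   # fp64 reference
denom = np.abs(A).astype(np.float64) @ np.abs(x).astype(np.float64)

eps_bwd = float(np.max(np.abs(y_hat - y) / denom))   # Oettli-Prager backward error
u = 2.0 ** -24
print(eps_bwd, n * u, np.sqrt(n) * u)   # typically much nearer sqrt(n)*u than n*u
```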
Random [−1, 1] Entries

[Plots of backward error and bounds against n, 10^1 ≤ n ≤ 10^4: matrix–vector product (fp32) and linear system Ax = b (fp32).]

The probabilistic bound (λ = 1) is much closer to the actual error. But for [−1, 1] entries it is still pessimistic.
Random [0, 1] Entries

[Plots of backward error and bounds against n, 10^1 ≤ n ≤ 10^4: matrix–vector product (fp32) and linear system Ax = b (fp32).]

The probabilistic bound has λ = 1, so P(λ) is pessimistic ... but the γ_n(λ) bound itself can be sharp and successfully captures the √n error growth ⇒ the bounds cannot be improved without further assumptions.
Low Precisions, Random [−1, 1] Entries

[Plots of backward error and bounds against n, 10^0 ≤ n ≤ 10^3: matrix–vector product (fp16) and matrix–vector product (fp8).]
Low Precisions, Random [0, 1] Entries

[Plots of backward error and bounds against n, 10^0 ≤ n ≤ 10^3: matrix–vector product (fp16) and matrix–vector product (fp8).]

The importance of the probabilistic bound becomes even clearer for lower precisions.
Real-Life Matrices

Solution of Ax = b (fp64), with b from Uniform[0, 1], for 943 matrices from the SuiteSparse collection (λ = 1).

[Scatter plot of backward error against n, 10^1 ≤ n ≤ 10^4, with errors between 10^(−16) and 10^(−12).]
Example: Rounding Errors Not Independent

Inner product of two constant vectors:

    s_{i+1} = s_i + a_i b_i = s_i + c  ⇒  ŝ_{i+1} = (ŝ_i + c)(1 + δ_i)

⇒ δ_i = θ is constant within intervals [2^(q−1), 2^q].

[Plot of error against n, 10^2 ≤ n ≤ 10^4, showing stretches of linear growth. Schematic: while the partial sums s_i, s_{i+1}, s_{i+2}, ... lie in [2^(q−1), 2^q], each addition of c incurs the same rounding error θ.]
Example: Rounding Errors of Nonzero Mean

Inner product of two very large nonnegative vectors:

    s_{i+1} = s_i + a_i b_i  ⇒  ŝ_{i+1} = (ŝ_i + a_i b_i)(1 + δ_i)

[Plot of error against n. Top: 1 ≤ n ≤ 10^6. Bottom: 10^6 ≤ n ≤ 10^8.]

Explanation: s_i keeps growing, and at some point it is so large that ŝ_{i+1} = ŝ_i ⇒ δ_i = −a_i b_i/(s_i + a_i b_i) < 0.
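The stagnation mechanism is easy to reproduce in fp32: once the running sum reaches 2^24, adding 1 no longer changes it, so every subsequent δ_i is negative rather than mean zero:

```python
import numpy as np

s = np.float32(2.0 ** 24)        # running sum at the stagnation threshold
s_next = s + np.float32(1.0)     # 2^24 + 1 ties and rounds to even: 2^24
print(s_next == s)               # True: the sum is stuck

delta = (float(s_next) - (float(s) + 1.0)) / (float(s) + 1.0)
print(delta)                     # about -6e-8, and negative from here on
```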
Conclusions
We have given the first rigorous justification of the “take the square root of the constant in the error bound” rule of thumb, and our results hold for any n!

Experiments show the probabilistic bounds give better predictions than deterministic ones for both random and real-life matrices and can be sharp, though they fail in two special situations. This is consistent with

    “The fact that rounding errors are neither random nor uncorrelated will not in itself preclude the possibility of modelling them usefully by uncorrelated random variables.”

    — Kahan, 1996

and answers Hull and Swenson’s question.
Manchester Numerical Linear Algebra Group: https://nla-group.org
References I
N. J. Higham. Accuracy and Stability of Numerical Algorithms. Second edition, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2002. ISBN 0-89871-521-0. xxx+680 pp.

N. J. Higham and T. Mary. A new approach to probabilistic rounding error analysis. MIMS EPrint 2018.33, Manchester Institute for Mathematical Sciences, The University of Manchester, UK, Nov. 2018. 22 pp. Revised March 2019.
References II
N. J. Higham and S. Pranesh. Simulating low precision floating-point arithmetic. MIMS EPrint 2019.4, Manchester Institute for Mathematical Sciences, The University of Manchester, UK, Mar. 2019. 17 pp.

T. E. Hull and J. R. Swenson. Tests of probabilistic models for propagation of roundoff errors. Comm. ACM, 9(2):108–113, 1966.
References III
W. Kahan. The improbability of probabilistic error analyses for numerical computations. Manuscript, Mar. 1996.

S. M. Rump and C.-P. Jeannerod. Improved backward error bounds for LU and Cholesky factorizations. SIAM J. Matrix Anal. Appl., 35(2):684–698, 2014.

J. H. Wilkinson. Error analysis of direct methods of matrix inversion. J. Assoc. Comput. Mach., 8:281–330, 1961.