7/27/2019 Adaptive Cross Approximation (ACA)
1/27
Adaptive Cross Approximation of Multivariate Functions
Mario Bebendorf
no. 453
This work was produced with the support of Sonderforschungsbereich 611, funded by the Deutsche Forschungsgemeinschaft, at the Universität Bonn, and has been reproduced as a manuscript.
Bonn, August 2009
Adaptive Cross Approximation of Multivariate
Functions
M. Bebendorf
August 11, 2009
In this article we present and analyze a new scheme for the approximation of multivariate functions (d = 3, 4) by sums of products of univariate functions. The method is based on the Adaptive Cross Approximation (ACA) initially designed for the approximation of bivariate functions. To demonstrate the linear complexity of the scheme, we apply it to large-scale multidimensional arrays generated by the evaluation of functions.
AMS Subject Classification: 41A80, 41A63, 15A69.
Keywords: data compression, dimensionality reduction, adaptive cross approximation.
1 Introduction
Representations of functions of several variables by sums of functions of fewer variables have been investigated in many publications; see [22] and the references therein. The best L²-approximation of functions of two variables is shown in [23, 25] to be given by the truncated Hilbert–Schmidt decomposition. This result was extended by Pospelov [21] to the approximation of functions of d variables by sums of products of functions of one variable. For some classes of functions the approximation by specific function systems such as exponential functions might be advantageous; see [8, 9], [7]. One of the best known decompositions in statistics is the analysis of variance (ANOVA) decomposition [15]. Related to this field of research are sparse grid approximations; see [28, 10].
An important application of this kind of approximation is the approximation of multidimensional arrays generated by the evaluation of functions. In this case a d-dimensional tensor is approximated by the tensor product of a small number of vectors, which significantly improves the computational complexity. Multidimensional arrays of data appear in many different applications, e.g. statistics, chemometrics, and finance. While for d = 2 a result due to Eckart and Young [12] states that the optimal rank-k approximation can be computed via the singular value decomposition (SVD), the generalization of the SVD to tensors of order more than two is not clear; see the counterexample in [18]. In the Tucker model [27] a third order tensor is approximated by
\[
\sum_{i=1}^{r_1}\sum_{j=1}^{r_2}\sum_{k=1}^{r_3} g_{ijk}\; x_i \otimes y_j \otimes z_k
\]
with the so-called core array g and the Tucker vectors x_i, y_j, and z_k. The PARAFAC model [11] uses diagonal core arrays. A method for third order tensors is proposed in [16]. Multi-level decompositions

Institut für Numerische Simulation, Rheinische Friedrich-Wilhelms-Universität Bonn, Wegelerstrasse 6, 53115 Bonn, Germany, Tel. +49 228 733144, [email protected].
are presented in [17]. The most popular method is based on alternating least squares minimization [19, 26]. In [29] an incremental method is proposed, i.e. the approximation is successively constructed from the respective remainder. While our technique is also based on successive rank-1 approximations, in [29] the optimal rank-1 approximation is computed via a generalized Rayleigh quotient iteration. Since all previous methods require the whole matrix for constructing the respective approximation, they may be computationally still too expensive. We present a method having linear complexity which uses only a small portion of the original matrix entries.
In [3] the adaptive cross approximation (ACA) was introduced. ACA approximates bivariate functions by sums of products of univariate functions. A characteristic property of ACA is that the approximation is constructed from restrictions of Φ to lower-dimensional domains of definition, i.e.
\[
\Phi(x,y) \approx \sum_{i,j=1}^{k} \alpha_{ij}\, \Phi(x_i, y)\, \Phi(x, y_j)
\]
with points x_i, y_j and coefficients α_{ij} which constitute the inverse of the matrix Φ(x_i, y_j), i, j = 1, …, k. The advantages of using the restrictions Φ(x_i, ·) and Φ(·, y_j), i, j = 1, …, k, are manifold. First of all, this kind of approximation guarantees quasi-optimal accuracy, i.e. the quality of the approximation will (up to constants) be at least as good as the approximation in any other system of functions of the same cardinality. Furthermore, matrix versions of ACA are able to construct approximations without computing all the matrix entries in advance; only the entries corresponding to the restrictions have to be evaluated. Finally, the method is adaptive because it is able to find the required rank k in the course of the approximation.
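To make the cross formula concrete, the following sketch (our own NumPy illustration; the grid, the kernel 1/(x+y), and the greedy full pivoting are choices made here, not taken from the paper) samples a bivariate function on a tensor grid, selects k crosses, and forms the approximation from k rows, k columns, and the inverse of the k×k pivot matrix:

```python
import numpy as np

# Sample a smooth bivariate function on a tensor grid.
x = np.linspace(0.0, 1.0, 60)
y = np.linspace(2.0, 3.0, 60)
Phi = 1.0 / (x[:, None] + y[None, :])        # Phi(x_i, y_j)

def cross_approximation(A, k):
    """k-term cross approximation A ~ C M^{-1} R built from
    k columns C, k rows R, and the k x k pivot matrix M."""
    rows, cols = [], []
    R = A.copy()
    for _ in range(k):
        # greedy pivot: entry of maximum modulus of the remainder
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        rows.append(i); cols.append(j)
        # one ACA step: subtract the scaled cross through (i, j)
        R = R - np.outer(R[:, j], R[i, :]) / R[i, j]
    C = A[:, cols]                   # restrictions Phi(x, y_j)
    M = A[np.ix_(rows, cols)]        # pivot matrix Phi(x_i, y_j)
    Rw = A[rows, :]                  # restrictions Phi(x_i, y)
    return C @ np.linalg.solve(M, Rw), rows, cols

S, rows, cols = cross_approximation(Phi, 5)
print(np.max(np.abs(Phi - S)))       # small for this smooth kernel
```

The returned matrix reproduces Φ exactly on the selected rows and columns, which is the interpolation property behind the quasi-optimality mentioned above.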
In the present article the adaptive cross approximation will be extended to functions of three and four variables. These two classes of functions will be treated by algorithms which, together with the bivariate ACA, can be investigated in the general setting of what we call incremental approximation. The latter will be introduced and investigated in Sect. 2. The convergence analysis of the bivariate ACA can be obtained as a special case. Furthermore, in Sect. 3 we will show convergence also for singular functions. A principal difference appears if ACA is extended to more than two variables, because the restricted domains of definition of the approximations are still more than one-dimensional. Hence, further approximation of these restrictions by an ACA of lower dimension is required. As a consequence, the influence of this perturbation on the convergence has to be analyzed. In the trivariate case treated in Sect. 4 one additional approximation using bivariate ACA per step is sufficient. The approximation of functions of four variables is treated in Sect. 5 and requires two additional bivariate ACA approximations per step. To demonstrate the linear complexity of the presented techniques, our theoretical findings are accompanied by the application of the presented schemes to large-scale multidimensional arrays. A method that is similar to the kind of matrix approximation treated in this article (at least for the trivariate case) was presented in [20]. Although it is proved in [20] that low-rank approximations exist, the convergence of the actual scheme has not been analyzed.
As the need for techniques to analyze the influence of perturbations on the convergence already appears in the cases of three and four variables, the results of this article are expected to be useful also for problems of more than four dimensions, because algorithms can be constructed by recursive bisection of the set of variables; see also [14]. In this sense this article lays the ground for the adaptive cross approximation of high-dimensional functions.
defined on the components of Λ^{(k)} : X → C^k,
\[
\Lambda^{(k)} := U_k^{-1}\, [\xi_1, \dots, \xi_k]^T,
\]
will play an important role in the stability analysis.
Lemma 2. It holds that
\[
U_k^{-T}
\begin{bmatrix} f(x_1)\\ \vdots\\ f(x_k) \end{bmatrix}
=
\begin{bmatrix} r_0[f](x_1)/\xi_1(x_1)\\ \vdots\\ r_{k-1}[f](x_k)/\xi_k(x_k) \end{bmatrix}. \tag{4}
\]
Hence,
\[
s_k[f] = \sum_{i=1}^{k} \frac{r_{i-1}[f](x_i)}{\xi_i(x_i)}\, \xi_i.
\]
Proof. Formula (4) is obviously true for k = 1. Assume that it is valid for k−1. Then, using (3),
\[
[f(x_1),\dots,f(x_k)]\, U_k^{-1}
= [f(x_1),\dots,f(x_k)]
\begin{bmatrix} U_{k-1}^{-1} & v_{k-1}\\ 0 & 1/\xi_k(x_k) \end{bmatrix},
\]
where
\[
v_{k-1} = -U_{k-1}^{-1} \begin{bmatrix} \xi_1(x_k)\\ \vdots\\ \xi_{k-1}(x_k) \end{bmatrix} \Big/ \xi_k(x_k).
\]
From (2) together with the assumption, we obtain
\[
[f(x_1),\dots,f(x_k)]\, U_k^{-1}
= \Bigl[ \frac{r_0(x_1)}{\xi_1(x_1)}, \dots, \frac{r_{k-2}(x_{k-1})}{\xi_{k-1}(x_{k-1})},\ [f(x_1),\dots,f(x_{k-1})]\, v_{k-1} + \frac{f(x_k)}{\xi_k(x_k)} \Bigr]
= \Bigl[ \frac{r_0(x_1)}{\xi_1(x_1)}, \dots, \frac{r_{k-1}(x_k)}{\xi_k(x_k)} \Bigr].
\]
In the following lemma we will investigate under which conditions on ξ_k the function s_k[f] interpolates f. Notice that r_k[f](x_k) = 0. However, r_k[f](x_j) does not vanish for j < k in general. The desired interpolation property will be characterized by the coincidence of the upper triangular matrix U_k with the k × k matrix
\[
M_k :=
\begin{bmatrix}
\xi_1(x_1) & \dots & \xi_1(x_k)\\
\vdots & & \vdots\\
\xi_k(x_1) & \dots & \xi_k(x_k)
\end{bmatrix}
\]
or, equivalently, by ξ_i(x_j) = 0 for i > j.
Lemma 3. For 1 ≤ j < k it holds that
\[
r_k[f](x_j) = -[f(x_1),\dots,f(x_k)]\, U_k^{-1}\, [0,\dots,0,\ \xi_{j+1}(x_j),\dots,\xi_k(x_j)]^T. \tag{5}
\]
Hence, s_k[f](x_j) = f(x_j), 1 ≤ j ≤ k, if M_k = U_k. In the other case we have that s_k[f](x_j) = (f_k)_j, 1 ≤ j ≤ k, where
\[
f_k := (U_k^{-1} M_k)^T\, [f(x_1),\dots,f(x_k)]^T.
\]
Proof. Since r_j(x_j) = 0, we have that
\[
f(x_j) = [f(x_1),\dots,f(x_j)]\, U_j^{-1}\, [\xi_1(x_j),\dots,\xi_j(x_j)]^T.
\]
Formula (5) follows from
\[
r_k(x_j) = f(x_j) - [f(x_1),\dots,f(x_k)]\, U_k^{-1}\, [\xi_1(x_j),\dots,\xi_k(x_j)]^T \tag{6}
\]
and the upper triangular structure of U_k, which contains U_j in the leading j × j subblock. The second part,
\[
[s_k(x_1),\dots,s_k(x_k)] = [f(x_1),\dots,f(x_k)]\, U_k^{-1} M_k,
\]
follows from s_k = f − r_k and (6).
Remark 1. Let M_k be non-singular. We denote by M_k^{(i)}(x) ∈ C^{k×k} the matrix which arises from replacing the i-th column of M_k by the vector v_k := [ξ_1(x), …, ξ_k(x)]^T. The functions
\[
L_i(x) := (M_k^{-1} v_k)_i = \frac{\det M_k^{(i)}(x)}{\det M_k} \in \operatorname{span}\{\xi_1,\dots,\xi_k\}
\]
are Lagrange functions for the points x_1, …, x_k, i.e. L_i(x_j) = δ_{ij}, i, j = 1, …, k. As a consequence of Lemma 3, the approximation
\[
s_k[f] = \sum_{i=1}^{k} (f_k)_i\, L_i
\]
is the uniquely defined Lagrangian interpolating function.
The following lemmas will help estimating the remainder r_k of the approximation. We first observe the following property.

Lemma 4. Let M_k = U_k and let functions η_1, …, η_k satisfy span{η_1, …, η_k} = span{ξ_1, …, ξ_k}. Then the matrix M̃_k ∈ C^{k×k} defined by (M̃_k)_{ij} = η_i(x_j), i, j = 1, …, k, is non-singular and
\[
r_k[f] = f - [f(x_1),\dots,f(x_k)]\, \tilde M_k^{-1}\, [\eta_1,\dots,\eta_k]^T.
\]
Proof. Let C ∈ C^{k×k} be an invertible matrix such that
\[
[\eta_1,\dots,\eta_k]^T = C\, [\xi_1,\dots,\xi_k]^T.
\]
Then it follows from M̃_k = C M_k that M̃_k is invertible and
\[
\tilde M_k^{-1}\, [\eta_1,\dots,\eta_k]^T = M_k^{-1}\, [\xi_1,\dots,\xi_k]^T.
\]
Lemma 1 gives the assertion.
In the following lemma the remainder r_k[f] is expressed in terms of the error
\[
E_k[f](x) := f(x) - [f(x_1),\dots,f(x_k)]\, \Lambda_k(x)
\]
of a linear approximation in an arbitrary system {λ_1, …, λ_k} ⊂ C(X) of functions. Here, we set Λ_k := [λ_1, …, λ_k]^T.
Lemma 5. It holds that
\[
r_k[f] = E_k[f] - [f(x_1),\dots,f(x_k)]\, U_k^{-1} (M_k - U_k)\, \Lambda_k - [f(x_1),\dots,f(x_k)]\, U_k^{-1}\, [E_k[\xi_1],\dots,E_k[\xi_k]]^T.
\]
In particular, if M_k = U_k, then
\[
r_k[f] = E_k[f] - [f(x_1),\dots,f(x_k)]\, \tilde M_k^{-1}\, [E_k[\eta_1],\dots,E_k[\eta_k]]^T,
\]
where M̃_k and η_1, …, η_k are as in Lemma 4.
Proof. The assertion follows from
\begin{align*}
r_k &= f - [f(x_1),\dots,f(x_k)]\, U_k^{-1}\, [\xi_1,\dots,\xi_k]^T\\
&= f - [f(x_1),\dots,f(x_k)]\, U_k^{-1} M_k \Lambda_k - [f(x_1),\dots,f(x_k)]\, U_k^{-1}\bigl( [\xi_1,\dots,\xi_k]^T - M_k \Lambda_k \bigr)\\
&= f - [f(x_1),\dots,f(x_k)]\, \Lambda_k - [f(x_1),\dots,f(x_k)]\, U_k^{-1}(M_k - U_k)\, \Lambda_k - [f(x_1),\dots,f(x_k)]\, U_k^{-1}\, [E_k[\xi_1],\dots,E_k[\xi_k]]^T.
\end{align*}
The second part of the assertion follows from Lemma 4.
Remark 2. The previous lemma relates the remainder r_k[f] to the error E_k[f] in the system Λ_k. Assume that the λ_i are Lagrange functions, i.e. λ_i(x_j) = δ_{ij}, i, j = 1, …, k. If r_k[f] is to be estimated by the best approximation error in Λ_k, then one can use the estimate
\[
\|E_k[f]\|_\infty \le (1 + \|I_k\|)\, \inf_{v \in \operatorname{span}\Lambda_k} \|f - v\|_\infty, \tag{7}
\]
where I_k : C(X) → C(X), defined by I_k f = \sum_{i=1}^{k} f(x_i)\, λ_i, denotes the interpolation operator in Λ_k and ‖I_k‖ := sup{‖I_k f‖_∞ : f ∈ C(X), ‖f‖_∞ = 1}. Estimate (7) is a consequence of
\[
\|f - I_k f\|_\infty \le \|f - v\|_\infty + \|I_k (f - v)\|_\infty \le (1 + \|I_k\|)\, \|f - v\|_\infty \quad \text{for all } v \in \operatorname{span}\Lambda_k.
\]
2.1 Perturbation analysis
The next question we are going to investigate is how approximations ξ̃_k to the functions ξ_k influence the approximation error r_k[f], i.e., we will compare the remainder r_k[f] with r̃_k[f] defined by r̃_0[f] = f and
\[
\tilde r_k[f] = \tilde r_{k-1}[f] - \frac{\tilde r_{k-1}[f](x_k)}{\tilde\xi_k(x_k)}\, \tilde\xi_k, \qquad k = 1, 2, \dots. \tag{8}
\]
In (8), x_k is chosen such that ξ̃_k(x_k) ≠ 0. Define Ũ_k, Λ̃^{(k)}, and Λ̃_{i,k} by replacing ξ_i in the respective definition with ξ̃_i. Notice that we use the same points x_k from the construction of r_k[f] also for the construction of r̃_k[f]. Therefore, we have to make sure that Ũ_k is invertible.
Lemma 6. Let ε_i := ‖ξ_i − ξ̃_i‖_∞ satisfy ε_i < |ξ_i(x_i)|, i = 1, …, k. Then ξ̃_i(x_i) ≠ 0 and
\[
\|r_k[f] - \tilde r_k[f]\|_\infty \le \sum_{i=1}^{k} \frac{|r_{i-1}[f](x_i)|}{|\xi_i(x_i)|}\, (\tilde\Lambda_{i,k} + 1)\, \varepsilon_i.
\]
Proof. From |ξ̃_i(x_i)| ≥ |ξ_i(x_i)| − ε_i > 0 it follows that Ũ_k is invertible. Setting
\[
E_k = U_k - \tilde U_k \quad\text{and}\quad \Delta^{(k)} = [\tilde\xi_1 - \xi_1, \dots, \tilde\xi_k - \xi_k]^T,
\]
due to Lemma 1 and Lemma 2 we have that
\begin{align*}
r_k - \tilde r_k
&= [f(x_1),\dots,f(x_k)]\Bigl( \tilde U_k^{-1}\, [\tilde\xi_1,\dots,\tilde\xi_k]^T - U_k^{-1}\, [\xi_1,\dots,\xi_k]^T \Bigr)\\
&= [f(x_1),\dots,f(x_k)]\, U_k^{-1}\Bigl( (\tilde U_k + E_k)\, \tilde U_k^{-1}\, [\tilde\xi_1,\dots,\tilde\xi_k]^T - [\xi_1,\dots,\xi_k]^T \Bigr)\\
&= [f(x_1),\dots,f(x_k)]\, U_k^{-1}\Bigl( \Delta^{(k)} + E_k\, \tilde\Lambda^{(k)} \Bigr)\\
&= \Bigl[ \frac{r_0(x_1)}{\xi_1(x_1)}, \dots, \frac{r_{k-1}(x_k)}{\xi_k(x_k)} \Bigr]\Bigl( \Delta^{(k)} + E_k\, \tilde\Lambda^{(k)} \Bigr).
\end{align*}
The assertion follows from ‖Δ_i^{(k)}‖_∞ ≤ ε_i, |(E_k)_{ij}| ≤ ε_i, and (E_k)_{ij} = 0 for i > j.

In addition to (8) one may also consider the following scheme, in which r̃_{k−1}[f](x_k) is replaced by some value a_k ∈ C:
\[
\tilde r_k[f] = \tilde r_{k-1}[f] - \frac{a_k}{\xi_k(x_k)}\, \xi_k. \tag{9}
\]
Then r̃_k[f] will usually neither vanish in the points x_j, 1 ≤ j < k, nor will a representation (2) hold. However, the following lemma can be proved. We will make use of the recursive relation
\[
\Lambda_i^{(k)} = \Lambda_i^{(k-1)} - \frac{\xi_k}{\xi_k(x_k)}\, \Lambda_i^{(k-1)}(x_k), \quad i = 1, \dots, k-1, \tag{10a}
\]
\[
\Lambda_k^{(k)} = \frac{\xi_k}{\xi_k(x_k)}, \tag{10b}
\]
for the components of Λ^{(k)}, which follows from (3).
Lemma 7. Let ε_j := r̃_{j−1}[f](x_j) − a_j satisfy |ε_j| ≤ ε, 1 ≤ j ≤ k. Then
\[
\tilde r_k[f] = r_k[f] + [\varepsilon_1,\dots,\varepsilon_k]\, U_k^{-1}\, [\xi_1,\dots,\xi_k]^T.
\]
Hence,
\[
\|\tilde r_k[f] - r_k[f]\|_\infty \le \sup_{x\in X} \sum_{j=1}^{k} |\Lambda_j^{(k)}(x)|\, |\varepsilon_j| \le \varepsilon\, \Lambda_{1,k}.
\]
Proof. The assertion is proved by induction. It is obviously true for k = 1. Assume that it is valid for k−1. Then
\begin{align*}
\tilde r_k &= \tilde r_{k-1} - \frac{\tilde r_{k-1}(x_k)}{\xi_k(x_k)}\, \xi_k + \frac{\varepsilon_k}{\xi_k(x_k)}\, \xi_k\\
&= r_{k-1} + [\varepsilon_1,\dots,\varepsilon_{k-1}]\, \Lambda^{(k-1)} - \frac{\xi_k}{\xi_k(x_k)}\Bigl( r_{k-1}(x_k) + [\varepsilon_1,\dots,\varepsilon_{k-1}]\, \Lambda^{(k-1)}(x_k) \Bigr) + \frac{\varepsilon_k}{\xi_k(x_k)}\, \xi_k\\
&= r_k + [\varepsilon_1,\dots,\varepsilon_{k-1}]\, \Lambda^{(k-1)} - \frac{\xi_k}{\xi_k(x_k)}\, [\varepsilon_1,\dots,\varepsilon_{k-1}]\, \Lambda^{(k-1)}(x_k) + \frac{\varepsilon_k}{\xi_k(x_k)}\, \xi_k.
\end{align*}
Equation (10) finishes the proof.
2.2 Estimating the amplification factors
As can be seen from Lemma 6 and Lemma 7, for the perturbation analysis it is crucial to estimate the size of the amplification factors ‖Λ^{(k)}‖_∞. The following lemma is an obvious consequence of (10). Let v_j : X → C, j = 1, …, k, be given functions and set
\[
\Psi^{(k)} := U_k^{-1}\, [v_1,\dots,v_k]^T.
\]

Lemma 8. If there is γ ≥ 1 such that ‖v_i‖_∞ ≤ γ |ξ_i(x_i)| for i = 1, …, k, then
\[
\|\Psi_i^{(k)}\|_\infty \le \gamma\Bigl( 1 + \sum_{j=i}^{k-1} \|\Lambda_i^{(j)}\|_\infty \Bigr).
\]
Furthermore, ‖Λ_i^{(k)}‖_∞ ≤ (1 + γ)^{k−i} provided that ‖ξ_i‖_∞ ≤ γ |ξ_i(x_i)| for some γ ∈ R.
Proof. It readily follows from (10) that ‖Λ_i^{(k)}‖_∞ ≤ (1 + γ)^{k−i}. Similar to (10), we obtain the following recursion formula for the components of Ψ^{(k)}:
\[
\Psi_i^{(k)} = \Psi_i^{(k-1)} - \frac{v_k}{\xi_k(x_k)}\, \Lambda_i^{(k-1)}(x_k), \quad i = 1, \dots, k-1, \qquad
\Psi_k^{(k)} = \frac{v_k}{\xi_k(x_k)},
\]
which shows ‖Ψ_i^{(i)}‖_∞ ≤ γ and the recursive relation
\[
\|\Psi_i^{(j+1)}\|_\infty \le \|\Psi_i^{(j)}\|_\infty + \gamma\, \|\Lambda_i^{(j)}\|_\infty, \quad j = i, \dots, k-1.
\]
Hence, we obtain that
\[
\|\Psi_i^{(k)}\|_\infty \le \gamma + \gamma \sum_{j=i}^{k-1} \|\Lambda_i^{(j)}\|_\infty.
\]

In particular, the previous lemma shows that ‖Λ_i^{(k)}‖_∞ ≤ 2^{k−i} provided that x_i maximizes |ξ_i|. In this case we have
\[
\Lambda_{i,k} \le \sum_{\ell=i}^{k} \|\Lambda_\ell^{(k)}\|_\infty \le \sum_{\ell=i}^{k} 2^{k-\ell} = 2^{k-i+1} - 1.
\]
If the ξ_i, i = 1, …, k, are smooth, i.e. if we may assume that
\[
|E_{k-i}[\xi_{i+\ell}](x)| \le \varepsilon\, |\xi_{i+\ell}(x_{i+\ell})|, \quad \ell = 0, \dots, k-i, \tag{11}
\]
with some ε > 0, and if ξ_i(x_j) = 0 for i > j, then significantly better bounds for Λ_i^{(k)} can be derived. Due to (10) each Λ_i^{(k)}, 1 ≤ i ≤ k, can be regarded as a remainder function of an approximation of type (1) with f := Λ_i^{(i)} and ξ_j := Λ_{i+j}^{(i+j)}, because it follows from (10) that
\[
\Lambda_i^{(i)}(x) = \frac{\xi_i(x)}{\xi_i(x_i)}, \qquad
\Lambda_i^{(i+j)}(x) = \Lambda_i^{(i+j-1)}(x) - \frac{\xi_{i+j}(x)}{\xi_{i+j}(x_{i+j})}\, \Lambda_i^{(i+j-1)}(x_{i+j}), \quad j = 1, \dots, k-i.
\]
Since span{ξ_1, …, ξ_{k−i}} = span{ξ_{i+1}, …, ξ_k}, Lemma 5 shows that
\[
\Lambda_i^{(k)}(x) = E_{k-i}[\Lambda_i^{(i)}](x) - [\Lambda_i^{(i)}(x_{i+1}),\dots,\Lambda_i^{(i)}(x_k)]\, (\tilde M_{k-i})^{-1}\, [E_{k-i}[\xi_{i+1}](x),\dots,E_{k-i}[\xi_k](x)]^T,
\]
where (M̃_{k−i})_{μν} = ξ_{i+μ}(x_{i+ν}), μ, ν = 1, …, k−i. Hence, from (11) and Lemma 8 we obtain that
\[
\|\Lambda_i^{(k)}\|_\infty \le \varepsilon + \varepsilon \sum_{\ell=1}^{k-i} |\Lambda_i^{(i)}(x_{i+\ell})| \Bigl( 1 + \sum_{j=\ell}^{k-i-1} 2^{j} \Bigr) \le \varepsilon\, 2^{k-i}, \tag{12}
\]
because M̃_{k−i} = U_{k−i} and |Λ_i^{(i)}(x_j)| ≤ 1, j = i+1, …, k, provided that x_i maximizes |ξ_i|.
We will, however, see in the numerical examples that typically Λ_i^{(k)} is significantly smaller than predicted by our worst-case estimates.
3 Adaptive Cross Approximation
The adaptive cross approximation (ACA) was introduced for Nyström matrices [3] and extended to collocation matrices [6]. A version with a refined pivoting strategy and a generalization of the method to Galerkin matrices was presented in [5]. The following recursion is at the core of this method.
Given Φ : X × Y → C, let R_0(x, y) = Φ(x, y) and
\[
R_k(x,y) = R_{k-1}(x,y) - \frac{R_{k-1}(x, y_k)\, R_{k-1}(x_k, y)}{R_{k-1}(x_k, y_k)}, \quad k = 1, 2, \dots. \tag{13}
\]
The points x_k and y_k are chosen such that R_{k−1}(x_k, y_k) ≠ 0. The previous recursion corresponds to the choice
\[
f := \Phi_x, \qquad \xi_i := R_{i-1}(x_i, \cdot)
\]
in (1) if x ∈ X is treated as a parameter. Then R_k(x, y) = r_k[Φ_x](y) holds, where Φ_x is defined by Φ_x(y) = Φ(x, y) for all x ∈ X, and ξ_i(y_j) = 0 for i > j can be seen by inductively applying Lemma 3. Since
\[
\operatorname{span}\{\xi_1,\dots,\xi_k\} = \operatorname{span}\{\Phi(x_1,\cdot),\dots,\Phi(x_k,\cdot)\},
\]
we see that η_i := Φ(x_i, ·) is a possible choice in Lemma 4. Hence, the degenerate approximation S_k := Φ − R_k of Φ has the representation
\[
S_k(x,y) = [\Phi(x,y_1),\dots,\Phi(x,y_k)]\, \tilde M_k^{-1}\, [\Phi(x_1,y),\dots,\Phi(x_k,y)]^T
= \sum_{i,j=1}^{k} (\tilde M_k^{-1})_{ij}\, \Phi(x, y_i)\, \Phi(x_j, y)
\]
with (M̃_k)_{ij} = Φ(x_i, y_j), i, j = 1, …, k.

In Lemma 5 we showed how the remainder R_k of the approximation can be estimated by relating the approximation to linear approximation in any system {λ_1, …, λ_k}. In particular, we obtain from the second part of Lemma 5 that
\[
R_k(x,y) = E_k[\Phi_x](y) - [E_k[\Phi_{x_1}](y),\dots,E_k[\Phi_{x_k}](y)]\, \Lambda^{(k)}(x),
\]
where
\[
\Lambda^{(k)}(x) := \tilde M_k^{-T}\, [\Phi(x,y_1),\dots,\Phi(x,y_k)]^T \in \mathbb C^k
\]
can be regarded as an amplification factor with respect to x. Therefore, we obtain
\[
|R_k(x,y)| \le (\Lambda_{1,k} + 1) \max_{z \in \{x, x_1,\dots,x_k\}} |E_k[\Phi_z](y)|. \tag{14}
\]
Similar to Remark 1, we denote by M̃_k^{(i)}(x) the matrix which arises from M̃_k by replacing the i-th row with the vector [Φ(x, y_1), …, Φ(x, y_k)]. Then due to Cramer's rule we have that
\[
\Lambda_i^{(k)}(x) = \frac{\det \tilde M_k^{(i)}(x)}{\det \tilde M_k}.
\]
Hence, if the pivoting points x_i, i = 1, …, k, are chosen such that
\[
|\det \tilde M_k^{(i)}(x)| \le |\det \tilde M_k| \quad \text{for all } x \in X \text{ and } i = 1, \dots, k,
\]
then ‖Λ_i^{(k)}‖_∞ = 1, and we obtain
\[
|R_k(x,y)| \le (k+1) \max_{z \in \{x, x_1,\dots,x_k\}} |E_k[\Phi_z](y)|.
\]
In this case of so-called matrices of maximum volume we also refer to the error estimates in [24], which are based on the technique of exact annihilators; see [2, 1]. In practice it is, however, difficult to find matrices of maximum volume. In Lemma 8 we observed that ‖Λ_i^{(k)}‖_∞ ≤ 2^{k−i}, i = 1, …, k, under the realistic condition
\[
|R_{k-1}(x, y_k)| \le |R_{k-1}(x_k, y_k)| \quad \text{for all } x \in X. \tag{15}
\]
In this case (14) becomes
\[
|R_k(x,y)| \le 2^k \max_{z \in \{x, x_1,\dots,x_k\}} |E_k[\Phi_z](y)|.
\]
If Φ is sufficiently smooth with respect to y and if the system Λ_k is appropriately chosen, then it can be expected that for all x ∈ X
\[
\|E_k[\Phi_x]\|_\infty \le \varepsilon^k \tag{16}
\]
with some 0 < ε < 1, which results in an approximation error of the order (2ε)^k. Hence, ACA converges if ε < 1/2. Notice that the choice of y_k does not improve our estimates on the amplification factors Λ^{(k)}. However, it is important for obtaining a reasonable decay of the error E_k[Φ_x]; for details see [5].
Up to now we have exploited that Φ is smooth only with respect to the second variable y. If Φ is smooth also with respect to x, then the arguments from the end of Sect. 2.2 can be applied to improve the error estimate. Condition (11) is satisfied, because according to assumption (16) for ℓ = 0, …, k−i we have that
\[
|E_{k-i}[R_{i+\ell-1}(\cdot, y_{i+\ell})](x)| \le \varepsilon^{k-i}\, |R_{i+\ell-1}(x_{i+\ell}, y_{i+\ell})|.
\]
Hence, if Φ is smooth also with respect to x, then according to (12) we obtain ‖Λ_i^{(k)}‖_∞ ≤ ε\,2^{k−i} and
\[
|R_k(x,y)| \le \varepsilon^k \sum_{i=1}^{k} 2^{k-i} \varepsilon^{k-i}
= 2^k \varepsilon^{2k} \sum_{i=1}^{k} (2\varepsilon)^{-i}
\le \frac{2^k \varepsilon^{2k}}{2\varepsilon - 1},
\]
which converges for ε < 1/√2. Although the estimate has improved, the dependence of Λ_{1,k} on k is still exponential. The actual growth with respect to k seems to be significantly slower; see the following numerical examples.
3.1 Matrix approximation
The previous estimates can be applied when approximating function-generated matrices
\[
a_{ij} = \Phi(p_i, q_j), \quad i = 1, \dots, m,\ j = 1, \dots, n,
\]
with p_i ∈ X and q_j ∈ Y. In this case (13) becomes the following matrix iteration. Starting from R_0 := A, find a nonzero pivot (i_k, j_k) in R_k and subtract a scaled outer product of the j_k-th column and the i_k-th row:
\[
R_{k+1} := R_k - [(R_k)_{i_k j_k}]^{-1}\, (R_k)_{1:m,\,j_k}\, (R_k)_{i_k,\,1:n},
\]
where we use the notations v_k := (R_{k−1})_{i_k, 1:n} and u_k := (R_{k−1})_{1:m, j_k} for the i_k-th row and the j_k-th column of R_{k−1}, respectively. We use (15) to select i_k. The choice of j_k is detailed in [5].
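A minimal runnable rendering of this iteration (our own Python sketch: it combines the row pivoting (15) with a simple incremental Frobenius-norm stopping test, instead of the refined strategies of [5]) evaluates only one residual row and one residual column per step, so the cost per rank stays proportional to m + n:

```python
import numpy as np

def aca(entry, m, n, eps=1e-10, max_k=60):
    """Partially pivoted ACA: A ~ sum_k u_k v_k^T, with A accessed
    only through single entries entry(i, j)."""
    us, vs, norm_S2 = [], [], 0.0
    i = 0                                      # initial pivot row
    for _ in range(max_k):
        # residual of row i: a_{i,:} minus the current approximation
        v = np.array([entry(i, j) for j in range(n)], dtype=float)
        for u_, v_ in zip(us, vs):
            v -= u_[i] * v_
        j = int(np.argmax(np.abs(v)))          # column pivot
        if abs(v[j]) < 1e-14:
            break
        u = np.array([entry(l, j) for l in range(m)], dtype=float)
        for u_, v_ in zip(us, vs):
            u -= v_[j] * u_
        v /= v[j]                              # scale so u_k v_k^T matches the cross
        # track ||S_k||_F^2 incrementally (linear complexity)
        cross = sum((u @ u_) * (v @ v_) for u_, v_ in zip(us, vs))
        norm_S2 += 2.0 * cross + (u @ u) * (v @ v)
        us.append(u); vs.append(v)
        absu = np.abs(u); absu[i] = 0.0
        i = int(np.argmax(absu))               # next row pivot, cf. (15)
        if np.linalg.norm(u) * np.linalg.norm(v) <= eps * np.sqrt(norm_S2):
            break
    return np.array(us), np.array(vs)

# generate a_{ij} = Phi(p_i, q_j) lazily, never forming A up front
p = np.linspace(0.0, 1.0, 80); q = np.linspace(0.0, 1.0, 90)
U, V = aca(lambda i, j: 1.0 / (1.0 + p[i] + q[j]), 80, 90)
```

Only k·(m + n) entries of A are ever evaluated; a dense copy of A is needed only if one wants to check the error explicitly.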
 n\k         1     2     3     4     5     6
 320 000    1.00  1.04  1.25  2.63  4.70  3.61
 640 000    1.00  1.04  1.26  2.61  3.76  3.70
 1 280 000  1.00  1.05  1.27  2.54  3.01  3.08

Table 2: Amplification factors Λ_{1,k} for the case ε = 10^{-7}.
3.2 Application to singular functions
In previous publications the adaptive cross approximation was applied to functions on domains X × Y which are well separated from the singular points of Φ. The following theorem shows the rate of convergence in the case that X × Y approaches singular points. As an important prototype we consider
\[
\Phi(x,y) := (|x|^q + |y|^q)^{-1/q} \tag{21}
\]
with arbitrary q ∈ N.

Theorem 1. Let δ_1, δ_2 > 0 and X ⊂ {x ∈ R^d : ‖x‖_2 > δ_1}, Y ⊂ {y ∈ R^d : ‖y‖_2 > δ_2}. Then for R_k applied to Φ from (21) there is a constant c_k > 0 such that
\[
|R_k(x,y)| \le c_k\, 8 \cdot 2^{1/q}\, (\delta_1^q + \delta_2^q)^{-1/q}\, e^{-\pi\sqrt{k/q}}, \quad x \in X,\ y \in Y.
\]
Proof. In [9] it is proved that for the approximation of the function f(t) := t^{−1/q} by exponential sums
\[
s_k(t) := \sum_{i=1}^{k} \omega_i\, e^{-\alpha_i t}, \quad \omega_i, \alpha_i \in \mathbb R,
\]
it holds that
\[
\|f - s_k\|_{\infty,[\delta,\infty)} = \min_{\omega_i, \alpha_i} \|f - s_k\|_{\infty,[\delta,\infty)} \le 8 \cdot 2^{1/q}\, \delta^{-1/q}\, e^{-\pi\sqrt{k/q}}.
\]
Without loss of generality we may assume that the coefficients α_i are pairwise distinct. Setting λ_i(y) := e^{−α_i |y|^q}, for x ∈ X it holds that s_k(|x|^q + |·|^q) ∈ span Λ_k. Hence, for the approximation of Φ on X × Y we obtain that
\begin{align*}
\sup_{x\in X}\, \inf_{v \in \operatorname{span}\Lambda_k} \|\Phi(x,\cdot) - v\|_{\infty,Y}
&\le \sup_{x\in X} \|f(|x|^q + |\cdot|^q) - s_k(|x|^q + |\cdot|^q)\|_{\infty,Y}\\
&\le \|f - s_k\|_{\infty,[\delta_1^q + \delta_2^q,\infty)}
\le 8 \cdot 2^{1/q}\, (\delta_1^q + \delta_2^q)^{-1/q}\, e^{-\pi\sqrt{k/q}}.
\end{align*}
The assertion follows from (14) and Remark 2 with c_k := (1 + ‖I_k‖)(Λ_{1,k} + 1).
The assertion follows from (14) and Remark 2 with ck := (1 + Ik)(1,k + 1).As a consequence, the rank k required to guarantee an error of order > 0 depends logarithmically
on both and the maximum := max{1, 2} of the distances to the singularity provided thatck e
2
k/q :
18 21/qe2
k/q < k > 4q2
[| log | + | log | + (3 + 1/q) log 2]2 . (22)
We will now construct matrix approximations for matrices A ∈ R^{n×n} generated by (21) for q = 2, 5 and ε = 10^{−5}. Table 3 shows that, in contrast to Table 1, the rank increases with the problem size due to the singularity of Φ. As predicted by (22), the dependence is logarithmic. The column labeled "factor" shows the compression ratio 2k/m, i.e. the ratio of the number of units of memory required by the approximation to that required by the original matrix.
Lemma 9. The function S_k is of the form
\[
S_k(x,y,z) = \sum_{\ell=1}^{k} \frac{R_{\ell-1}(x, y_\ell, z_\ell)}{R_{\ell-1}(x_\ell, y_\ell, z_\ell)} \sum_{\mu,\nu=1}^{k_\ell} \alpha_{\mu\nu}^{(\ell)}\, R_{\ell-1}(x_\ell, y, z_\mu^{(\ell)})\, R_{\ell-1}(x_\ell, y_\nu^{(\ell)}, z) \tag{24a}
\]
\[
\hphantom{S_k(x,y,z)} = \sum_{\ell=1}^{k} \Phi(x, y_\ell, z_\ell) \sum_{i,j=1}^{k} \sum_{\mu,\nu=1}^{k_\ell} \beta_{\mu\nu}^{(ij)}\, \Phi(x_i, y, z_\mu^{(i)})\, \Phi(x_j, y_\nu^{(j)}, z) \tag{24b}
\]
with points y_ν^{(j)}, z_μ^{(i)} and suitable coefficients α_{μν}^{(ℓ)}, β_{μν}^{(ij)}.
Proof. We have seen in Sect. 3 that
\[
A_{yz}[f](y,z) = \sum_{\mu,\nu=1}^{k'} \alpha_{\mu\nu}\, f(y, z_\mu)\, f(y_\nu, z)
\]
with coefficients α_{μν} and points y_ν, z_μ depending on f. Hence, it is easy to see that
\[
S_1(x,y,z) = \frac{\Phi(x, y_1, z_1)}{\Phi(x_1, y_1, z_1)} \sum_{\mu,\nu=1}^{k_1} \alpha_{\mu\nu}^{(1)}\, \Phi(x_1, y, z_\mu^{(1)})\, \Phi(x_1, y_\nu^{(1)}, z).
\]
Assume that the assertion is valid for k−1. Since
\[
S_{k-1}(x_k, y_\nu^{(k)}, z) = \sum_{\lambda=1}^{k}\sum_{j=1}^{k-1} \sigma_j^{(\lambda)}\, \Phi(x_j, y_\lambda^{(j)}, z)
\quad\text{and}\quad
S_{k-1}(x_k, y, z_\mu^{(k)}) = \sum_{\lambda=1}^{k}\sum_{i=1}^{k-1} \tau_i^{(\lambda)}\, \Phi(x_i, y, z_\lambda^{(i)}),
\]
where
\[
\sigma_j^{(\lambda)} = \sum_{i,\ell=1}^{k-1} \Phi(x_k, y_\ell, z_\ell) \sum_{\mu=1}^{k_\ell} \beta_{\mu\lambda}^{(ij)}\, \Phi(x_i, y_\nu^{(k)}, z_\mu^{(i)})
\quad\text{and}\quad
\tau_i^{(\lambda)} = \sum_{j,\ell=1}^{k-1} \Phi(x_k, y_\ell, z_\ell) \sum_{\nu=1}^{k_\ell} \beta_{\lambda\nu}^{(ij)}\, \Phi(x_j, y_\nu^{(j)}, z_\mu^{(k)}),
\]
it follows that
\[
R_{k-1}(x_k, y_\nu^{(k)}, z) = \Phi(x_k, y_\nu^{(k)}, z) - S_{k-1}(x_k, y_\nu^{(k)}, z) = \sum_{\lambda=1}^{k}\sum_{j=1}^{k} \hat\sigma_j^{(\lambda)}\, \Phi(x_j, y_\lambda^{(j)}, z)
\]
and similarly
\[
R_{k-1}(x_k, y, z_\mu^{(k)}) = \sum_{\lambda=1}^{k}\sum_{i=1}^{k} \hat\tau_i^{(\lambda)}\, \Phi(x_i, y, z_\lambda^{(i)}).
\]
Hence
\[
A_{yz}[R_{k-1}|_{x_k}](y,z) = \sum_{\mu,\nu=1}^{k_k} \alpha_{\mu\nu}^{(k)}\, R_{k-1}(x_k, y, z_\mu^{(k)})\, R_{k-1}(x_k, y_\nu^{(k)}, z)
= \sum_{i,j=1}^{k} \sum_{\lambda,\kappa=1}^{k} \hat\beta_{\lambda\kappa}^{(ij)}\, \Phi(x_i, y, z_\lambda^{(i)})\, \Phi(x_j, y_\kappa^{(j)}, z),
\]
where \(\hat\beta_{\lambda\kappa}^{(ij)} := \sum_{\mu,\nu=1}^{k_k} \alpha_{\mu\nu}^{(k)}\, \hat\tau_{i}^{(\lambda)}\, \hat\sigma_{j}^{(\kappa)}\), the hatted coefficients being taken at the points z_μ^{(k)} and y_ν^{(k)}, respectively. Together with
\[
\frac{R_{k-1}(x, y_k, z_k)}{R_{k-1}(x_k, y_k, z_k)} = \sum_{\ell=1}^{k} \gamma_\ell^{(k)}\, \Phi(x, y_\ell, z_\ell)
\]
we obtain the assertion.
Whereas in the bivariate case only the amplification factors Λ_{1,k} entered the error estimates, the perturbation introduced by A_{yz}[R_{k−1}|_{x_k}] is also amplified by the expression
\[
c_{\mathrm{piv}}^{(k)} := \max_{y\in Y,\, z\in Z} \frac{|A_{yz}[R_{k-1}|_{x_k}](y,z)|}{|R_{k-1}(x_k, y_k, z_k)|},
\]
as we shall see in the following theorem. Notice that the factor c_{piv}^{(k)} can be evaluated easily in each step of the iteration to check its size.
Theorem 2. Let ε > 0 be sufficiently small, and for j = 1, …, k assume that
\[
\sup_{y\in Y,\, z\in Z} |R_{j-1}(x_j, y, z) - A_{yz}[R_{j-1}|_{x_j}](y,z)| \le \varepsilon.
\]
Then for x ∈ X, y ∈ Y, and z ∈ Z
\[
|R_k(x,y,z)| \le (\Lambda_{1,k} + 1) \max_{\psi \in \{x, x_1,\dots,x_k\}} \|E_k[\Phi_\psi]\|_{\infty, Y\times Z} + c_k\, \varepsilon,
\]
where c_k := \Lambda_{1,k} + 2\sum_{j=1}^{k} \Lambda_{1,j-1} \prod_{i=j}^{k} (c_{\mathrm{piv}}^{(i)} + 1)(\Lambda_{i,k} + 1).
Proof. Notice that |R_k(x,y,z)| was estimated in the last section if (y, z) is treated as a single variable; see (14). Hence,
\[
|R_k(x,y,z)| \le (\Lambda_{1,k} + 1) \max_{\psi \in \{x, x_1,\dots,x_k\}} \|E_k[\Phi_\psi]\|_{\infty, Y\times Z}.
\]
Furthermore, for fixed y, z we have that R_k = r_k[Φ_{y,z}] is of type (1) if we choose ξ_k := r_{k−1}[Φ_{y_k,z_k}], while the recursion for R̃_k is of type (9), i.e. R̃_k(x,y,z) = r̃_k[Φ_{y,z}](x), for the choice
\[
\xi_k := r_{k-1}[\Phi_{y_k,z_k}] = R_{k-1}(\cdot, y_k, z_k), \qquad a_k := A_{yz}[R_{k-1}|_{x_k}](y,z).
\]
We obtain from Lemma 7 that
\[
\|r_k[\Phi_{y,z}] - \tilde r_k[\Phi_{y,z}]\|_\infty \le \sum_{j=1}^{k} \|\Lambda_j^{(k)}\|_\infty\, |\tilde r_{j-1}[\Phi_{y,z}](x_j) - a_j| \le \varepsilon\, \Lambda_{1,k},
\]
because
\[
|\tilde r_{j-1}[\Phi_{y,z}](x_j) - a_j| = |R_{j-1}(x_j, y, z) - A_{yz}[R_{j-1}|_{x_j}](y,z)| \le \varepsilon.
\]
Let F_k := \sup_{y,z} \|r_{k-1}[\Phi_{y,z}] - \tilde r_{k-1}[\Phi_{y,z}]\|_\infty. Then it follows that
\[
\|\xi_k - \tilde\xi_k\|_\infty = \|r_{k-1}[\Phi_{y_k,z_k}] - \tilde r_{k-1}[\Phi_{y_k,z_k}]\|_\infty \le F_k + \varepsilon\, \Lambda_{1,k-1}.
\]
For sufficiently small ε we may assume that
\[
F_k + \varepsilon(\Lambda_{1,k-1} + 1) \le \tfrac12\, |\xi_k(x_k)|. \tag{25}
\]
Then Lemma 6 proves the estimate
\[
F_{k+1} \le \sum_{i=1}^{k} \gamma_i\, (F_i + \varepsilon\, \Lambda_{1,i-1}) \tag{26}
\]
with
\[
\gamma_i := \frac{\sup_{y,z} |\tilde r_{i-1}[\Phi_{y,z}](x_i)|}{|\xi_i(x_i)|}\, (\Lambda_{i,k} + 1).
\]
From |r̃_{i−1}[Φ_{y,z}](x_i) − A_{yz}[R_{i−1}|_{x_i}](y,z)| ≤ F_i + ε(Λ_{1,i−1} + 1) we obtain that
\[
\frac{\sup_{y,z} |\tilde r_{i-1}[\Phi_{y,z}](x_i)|}{|\xi_i(x_i)|}
\le \frac{\sup_{y,z} |A_{yz}[R_{i-1}|_{x_i}](y,z)| + F_i + \varepsilon(\Lambda_{1,i-1} + 1)}{|\xi_i(x_i)| - F_i - \varepsilon\,\Lambda_{1,i-1}}
\le 2 c_{\mathrm{piv}}^{(i)} + 1
\]
due to (25). Define F'_1 = 0 and F'_{k+1} = \sum_{i=1}^{k} γ_i (F'_i + εΛ_{1,i−1}). We see that
\[
F'_{k+1} = F'_k + \gamma_k (F'_k + \varepsilon\, \Lambda_{1,k-1}) = (\gamma_k + 1) F'_k + \gamma_k\, \varepsilon\, \Lambda_{1,k-1}
\]
and thus
\[
F'_k = \varepsilon \sum_{j=1}^{k-1} \Lambda_{1,j-1}\, \gamma_j \prod_{i=j+1}^{k-1} (\gamma_i + 1).
\]
From F_1 = 0 and (26) we see that
\[
F_k \le F'_k = \varepsilon \sum_{j=1}^{k-1} \Lambda_{1,j-1}\, \gamma_j \prod_{i=j+1}^{k-1} (\gamma_i + 1) \le \varepsilon \sum_{j=1}^{k-1} \Lambda_{1,j-1} \prod_{i=j}^{k-1} (\gamma_i + 1).
\]
It follows that
\[
\|\tilde r_k\|_\infty \le \|r_k\|_\infty + F_{k+1} + \varepsilon\, \Lambda_{1,k}
\le \|r_k\|_\infty + \varepsilon\Bigl( \Lambda_{1,k} + 2\sum_{j=1}^{k} \Lambda_{1,j-1} \prod_{i=j}^{k} (c_{\mathrm{piv}}^{(i)} + 1)(\Lambda_{i,k} + 1) \Bigr).
\]
4.1 Matrix approximation
We apply the approximation (23) to the tensor A ∈ R^{n×n×n} with entries a_{i_1 i_2 i_3} = Φ(p_{i_1}, p_{i_2}, p_{i_3}) and p_i = \frac{1}{n}(i − \frac12), i = 1, …, n, generated by evaluating the smooth function
\[
\Phi(x,y,z) = (1 + x^2 + y^2 + z^2)^{-1/2}.
\]
From (24a) we obtain the representation
\[
(S_k)_{i_1 i_2 i_3} = \sum_{\ell=1}^{k} (w_\ell)_{i_1} \sum_{\mu=1}^{k_\ell} (u_\mu^\ell)_{i_2} (v_\mu^\ell)_{i_3}
\]
with appropriate vectors u_μ^ℓ, v_μ^ℓ, and w_ℓ, μ = 1, …, k_ℓ, ℓ = 1, …, k. Here, k_ℓ denotes the rank of the ℓ-th two-dimensional approximation. Hence,
\[
\|S_k\|_F^2 = \sum_{i_1,i_2,i_3=1}^{n} (S_k)^2_{i_1 i_2 i_3} = \sum_{\ell,\lambda=1}^{k} (w_\ell, w_\lambda)\, \sigma_{\ell\lambda},
\]
where
\[
\sigma_{\ell\lambda} := \sum_{\mu=1}^{k_\ell} \sum_{\nu=1}^{k_\lambda} (u_\mu^\ell, u_\nu^\lambda)(v_\mu^\ell, v_\nu^\lambda),
\]
can be exploited to evaluate ‖S_k‖_F with linear complexity. Furthermore, we have that
\[
\|S_{k+1} - S_k\|_F^2 = \|w_{k+1}\|_2^2\, \sigma_{k+1,k+1},
\]
and condition (20) becomes
\[
\|w_{r+1}\|_2\, \sigma_{r+1,r+1}^{1/2} \le \varepsilon\, \|S_{k+1}\|_F
\]
in the case of third order tensors. Notice that the computation of R_k can be avoided by tracing back the error as in (17) and (18).
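In the simplest case k_ℓ = 1 (one vector pair per level; all names below are ours) the Gram-matrix formula above reads ‖S_k‖_F² = Σ_{ℓ,λ} (w_ℓ, w_λ)(u_ℓ, u_λ)(v_ℓ, v_λ), which the following sketch checks against a dense evaluation on a tiny example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 15, 4
# S_k = sum_l w_l (x) u_l (x) v_l, stored only through its k vector triples
w = rng.standard_normal((k, n))
u = rng.standard_normal((k, n))
v = rng.standard_normal((k, n))

# ||S_k||_F^2 from inner products alone: O(k^2 n) instead of O(n^3)
norm2_fast = ((w @ w.T) * (u @ u.T) * (v @ v.T)).sum()

# dense reference (only feasible for tiny n)
S = np.einsum('li,lj,lk->ijk', w, u, v)
print(norm2_fast, (S**2).sum())
```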
The pivoting indices (i_1^{(k)}, i_2^{(k)}, i_3^{(k)}) can be obtained in many ways. The aim of this choice is to reduce the amplification factors Λ_{1,k} and c_{piv}^{(k)}. In the following numerical examples we have chosen i_1^{(ℓ)} as the index of the maximum entry in modulus of the vector A_{1:n, i_2^{(ℓ)}, i_3^{(ℓ)}}, while (i_2^{(ℓ+1)}, i_3^{(ℓ+1)}) is the index of the maximum entry in modulus of the rank-k_ℓ matrix U_ℓ V_ℓ^H, U_ℓ ∈ C^{n×k_ℓ} and V_ℓ ∈ C^{n×k_ℓ}. The maximum can be found with complexity O(k_ℓ^2 n) via the following procedure. Let u_j and v_j denote the columns of U_ℓ and V_ℓ, respectively. In [13] it is pointed out that the n² × n² matrix
\[
C := \sum_{j=1}^{k_\ell} \operatorname{diag}(u_j) \otimes \operatorname{diag}(v_j)
\]
has the eigenvalues (U_ℓ V_ℓ^H)_{ij} with eigenvectors e_i ⊗ e_j, i, j = 1, …, n. Hence, the entry of maximum modulus can be computed, for instance, by vector iteration. Here, the problem arises that
\[
C(x \otimes y) = \sum_{j=1}^{k_\ell} (\operatorname{diag}(u_j)\, x) \otimes (\operatorname{diag}(v_j)\, y),
\]
i.e. the rank increases from step to step. In order to avoid this, C(x ⊗ y), regarded as an n × n matrix, is truncated to rank 1 via the singular value decomposition. The latter can be computed with O(k_ℓ^2 n) operations; see [4].
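The search just described can be sketched as follows (our own NumPy rendering; the QR-plus-small-SVD truncation keeps the O(k²n) cost per step, and the planted test matrix is an assumption made here for illustration):

```python
import numpy as np

def max_entry_lowrank(U, V, iters=30):
    """Locate the entry of maximum modulus of A = U @ V.T by vector
    iteration on C = sum_j diag(u_j) (x) diag(v_j): the iterate
    x (x) y is truncated back to rank 1 after every application."""
    n, k = U.shape
    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    for _ in range(iters):
        P = U * x[:, None]                 # columns diag(u_j) x
        Q = V * y[:, None]                 # columns diag(v_j) y
        # rank-1 truncation of P @ Q.T with O(k^2 n) work: QR + k x k SVD
        Qp, Rp = np.linalg.qr(P)
        Qq, Rq = np.linalg.qr(Q)
        Us, s, Vh = np.linalg.svd(Rp @ Rq.T)
        x, y = Qp @ Us[:, 0], Qq @ Vh[0]
    return int(np.argmax(np.abs(x))), int(np.argmax(np.abs(y)))

# small check with a planted dominant entry at (5, 7)
rng = np.random.default_rng(1)
U = 0.1 * rng.standard_normal((40, 3)); V = 0.1 * rng.standard_normal((40, 3))
U[5, 0] += 10.0; V[7, 0] += 10.0
i, j = max_entry_lowrank(U, V)
```

The fixed point x = e_i, y = e_j of this iteration reproduces exactly the eigenvector structure e_i ⊗ e_j noted above.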
Hence, the complexity of the matrix approximation algorithm is O(n \sum_{\ell=1}^{k} k_\ell^2), while the storage requirement is O(n k_S), where k_S := \sum_{\ell=1}^{k} k_\ell. Table 5 shows the number of steps k, the Kronecker rank k_S, and the CPU time required for constructing the approximation. The column labeled "factor" contains the compression ratio (2k_S + 1)/n².
            ε = 10^{-3}                 ε = 10^{-4}                 ε = 10^{-5}
 n          k  kS  factor      time [s] k  kS  factor      time [s] k  kS  factor      time [s]
 160 000    3   9  7·10^{-10}     0.6   3  11  9·10^{-10}     0.7   4  18  1·10^{-9}      1.5
 320 000    3   9  2·10^{-10}     1.5   3  11  2·10^{-10}     1.8   4  19  4·10^{-10}     4.0
 640 000    3   9  5·10^{-11}     3.8   3  10  5·10^{-11}     4.5   4  20  1·10^{-10}    10.6
 1 280 000  3   9  1·10^{-11}     6.9   3  10  1·10^{-11}     9.0   4  20  3·10^{-11}    21.3

Table 5: ACA for third order tensors.
Table 6 shows the results obtained for the functions
\[
\Phi(x,y,z) = (x^q + y^q + z^q)^{-1/q}
\]
with q = 1, 2, which are singular at x = y = z = 0. The ranks increase compared with Table 5, but the complexity is still linear with respect to n.
            q = 1                        q = 2
 n          k   kS   factor      time [s] k   kS   factor      time [s]
 160 000    21  308  2·10^{-8}    127.1   32  706  6·10^{-8}    597.7
 320 000    17  214  4·10^{-9}    190.8   30  660  1·10^{-8}   1416.3
 640 000    26  405  2·10^{-9}   1426.2   31  667  3·10^{-9}   3653.2
 1 280 000  24  421  5·10^{-10}  3102.5   28  574  7·10^{-10}  5756.9

Table 6: ACA for third order tensors and ε = 10^{-3}.
5 Adaptive cross approximation of functions of four variables
The construction of approximations to functions
\[
\Phi : W \times X \times Y \times Z \to \mathbb C
\]
of four variables w, x, y, z can be done by applying ACA to the pairs (w, x) and (y, z):
\[
R_k(w,x,y,z) = R_{k-1}(w,x,y,z) - \frac{R_{k-1}(w,x,y_k,z_k)\, R_{k-1}(w_k,x_k,y,z)}{R_{k-1}(w_k,x_k,y_k,z_k)}.
\]
Since this leads to bivariate functions, we approximate them using ACA again. Hence, in this section we will investigate the recursion
\[
R_k(w,x,y,z) = R_{k-1}(w,x,y,z) - \frac{A_{wx}[R_{k-1}|_{y_k,z_k}](w,x)\; A_{yz}[R_{k-1}|_{w_k,x_k}](y,z)}{A_{wx}[R_{k-1}|_{y_k,z_k}](w_k,x_k)} \tag{27}
\]
for k = 1, 2, … and R_0 = Φ. The choice of (w_k, x_k, y_k, z_k) guarantees A_{wx}[R_{k−1}|_{y_k,z_k}](w_k, x_k) ≠ 0. Here, A_{wx}[f] and A_{yz}[g] denote ACA approximations of the bivariate functions f and g.

Lemma 10. The approximating function S_k := Φ − R_k is of the form
\[
S_k(w,x,y,z) = \sum_{\ell=1}^{k} u_\ell(w,x)\, v_\ell(y,z) = \sum_{|\lambda|=1}^{k} \sum_{|i|=1}^{\bar k} \gamma_i^\lambda\, f_i^\lambda(w,x,y,z), \tag{28}
\]
where, for multi-indices λ, i ∈ N^4, |i| := max_{j=1,…,4} i_j,
\[
u_\ell(w,x) := \sum_{i,j=1}^{k_\ell} \alpha_{ij}^{(\ell)}\, R_{\ell-1}(w, x_i^{(\ell)}, y_\ell, z_\ell)\, R_{\ell-1}(w_j^{(\ell)}, x, y_\ell, z_\ell),
\]
\[
v_\ell(y,z) := \sum_{i,j=1}^{\bar k_\ell} \beta_{ij}^{(\ell)}\, R_{\ell-1}(w_\ell, x_\ell, y, z_i^{(\ell)})\, R_{\ell-1}(w_\ell, x_\ell, y_j^{(\ell)}, z),
\]
and
\[
f_i^\lambda(w,x,y,z) := \Phi(w, x_{i_1}^{(\lambda_1)}, y_{\lambda_1}, z_{\lambda_1})\, \Phi(w_{i_2}^{(\lambda_2)}, x, y_{\lambda_2}, z_{\lambda_2})\, \Phi(w_{\lambda_3}, x_{\lambda_3}, y, z_{i_3}^{(\lambda_3)})\, \Phi(w_{\lambda_4}, x_{\lambda_4}, y_{i_4}^{(\lambda_4)}, z)
\]
with points x_{i_1}^{(λ_1)}, w_{i_2}^{(λ_2)}, z_{i_3}^{(λ_3)}, y_{i_4}^{(λ_4)} and coefficients γ_i^λ, α_{ij}^{(ℓ)}, and β_{ij}^{(ℓ)}.
Proof. We already know that
\[
A_{wx}[f](w,x) = \sum_{i,j=1}^{k'} \alpha_{ij}\, f(w, x_i)\, f(w_j, x)
\]
with suitable coefficients α_{ij} and points x_i, w_j depending on f. Hence, it is easy to see that S_1 is of the desired form. Assuming that the assertion is true for k−1, we obtain from
\[
S_{k-1}(w_k, x_k, y_j^{(k)}, z) = \sum_{\lambda_4=1}^{k-1} \sum_{i_4=1}^{\bar k} \sigma_{i_4}^{j\lambda_4}\, \Phi(w_{\lambda_4}, x_{\lambda_4}, y_{i_4}^{(\lambda_4)}, z),
\]
where
\[
\sigma_{i_4}^{j\lambda_4} := \sum_{\lambda_1,\lambda_2,\lambda_3=1}^{k-1} \sum_{i_1,i_2,i_3=1}^{\bar k} \gamma_i^\lambda\, \Phi(w_k, x_{i_1}^{(\lambda_1)}, y_{\lambda_1}, z_{\lambda_1})\, \Phi(w_{i_2}^{(\lambda_2)}, x_k, y_{\lambda_2}, z_{\lambda_2})\, \Phi(w_{\lambda_3}, x_{\lambda_3}, y_j^{(k)}, z_{i_3}^{(\lambda_3)}),
\]
that
\[
R_{k-1}(w_k, x_k, y_j^{(k)}, z) = \Phi(w_k, x_k, y_j^{(k)}, z) - S_{k-1}(w_k, x_k, y_j^{(k)}, z) = \sum_{\lambda_4=1}^{k} \sum_{i_4=1}^{\bar k} \hat\sigma_{i_4}^{j\lambda_4}\, \Phi(w_{\lambda_4}, x_{\lambda_4}, y_{i_4}^{(\lambda_4)}, z)
\]
and similarly
\[
R_{k-1}(w_k, x_k, y, z_i^{(k)}) = \sum_{\lambda_3=1}^{k} \sum_{i_3=1}^{\bar k} \hat\tau_{i_3}^{i\lambda_3}\, \Phi(w_{\lambda_3}, x_{\lambda_3}, y, z_{i_3}^{(\lambda_3)}).
\]
Hence
\[
A_{yz}[R_{k-1}|_{w_k,x_k}](y,z) = \sum_{i,j=1}^{\bar k_k} \beta_{ij}^{(k)}\, R_{k-1}(w_k, x_k, y, z_i^{(k)})\, R_{k-1}(w_k, x_k, y_j^{(k)}, z)
= \sum_{\lambda_3,\lambda_4=1}^{k} \sum_{i_3,i_4=1}^{\bar k} \delta_{i_3 i_4}^{\lambda_3\lambda_4}\, \Phi(w_{\lambda_3}, x_{\lambda_3}, y, z_{i_3}^{(\lambda_3)})\, \Phi(w_{\lambda_4}, x_{\lambda_4}, y_{i_4}^{(\lambda_4)}, z),
\]
where \(\delta_{i_3 i_4}^{\lambda_3\lambda_4} := \sum_{i,j=1}^{\bar k_k} \beta_{ij}^{(k)}\, \hat\tau_{i_3}^{i\lambda_3}\, \hat\sigma_{i_4}^{j\lambda_4}\). Similarly
\[
A_{wx}[R_{k-1}|_{y_k,z_k}](w,x) = \sum_{\lambda_1,\lambda_2=1}^{k} \sum_{i_1,i_2=1}^{\bar k} \rho_{i_1 i_2}^{\lambda_1\lambda_2}\, \Phi(w, x_{i_1}^{(\lambda_1)}, y_{\lambda_1}, z_{\lambda_1})\, \Phi(w_{i_2}^{(\lambda_2)}, x, y_{\lambda_2}, z_{\lambda_2}),
\]
where \(\rho_{i_1 i_2}^{\lambda_1\lambda_2} := \sum_{i,j=1}^{k_k} \alpha_{ij}^{(k)}\, \hat\tau_{i_1}^{i\lambda_1}\, \hat\sigma_{i_2}^{j\lambda_2}\), from which the assertion follows.
For four variables we obtain a similar result as Theorem 2 in the trivariate case. Here, in addition to the amplification factors Λ_{1,k}, the expression
\[
c_{\mathrm{piv}}^{(k)} := \max_{y\in Y,\, z\in Z} \frac{|A_{yz}[R_{k-1}|_{w_k,x_k}](y,z)|}{|A_{wx}[R_{k-1}|_{y_k,z_k}](w_k,x_k)|}
\]
will enter the estimates.
Theorem 3. Let ε > 0 be sufficiently small, and for j = 1, …, k let
\[
\sup_{y\in Y,\, z\in Z} |R_{j-1}(w_j, x_j, y, z) - A_{yz}[R_{j-1}|_{w_j,x_j}](y,z)| \le \varepsilon, \tag{29a}
\]
\[
\sup_{w\in W,\, x\in X} |R_{j-1}(w, x, y_j, z_j) - A_{wx}[R_{j-1}|_{y_j,z_j}](w,x)| \le \varepsilon. \tag{29b}
\]
Then for w ∈ W, x ∈ X, y ∈ Y, and z ∈ Z
\[
|R_k(w,x,y,z)| \le (1 + \Lambda_{1,k}) \max_{(\psi,\varphi) \in \{(w,x),\, (w_i,x_i),\ i=1,\dots,k\}} \|E_k[\Phi_{\psi,\varphi}]\|_{\infty, Y\times Z} + c_k\, \varepsilon,
\]
where
\[
c_k := \Lambda_{1,k} + 2\sum_{j=1}^{k} (\Lambda_{1,j-1} + 1) \prod_{i=j}^{k} (c_{\mathrm{piv}}^{(i)} + 1)(\Lambda_{i,k} + 1).
\]
Proof. For fixed parameters y, z the recursion for R̃_k is of type (9), i.e. R̃_k(w,x,y,z) = r̃_k[Φ_{y,z}](w,x), if we choose
\[
\tilde\xi_k(w,x) := A_{wx}[r_{k-1}[\Phi_{y_k,z_k}]](w,x) = A_{wx}[R_{k-1}|_{y_k,z_k}](w,x), \qquad a_k := A_{yz}[R_{k-1}|_{w_k,x_k}](y,z).
\]
Let r_k be defined as in (1) with ξ_k(w,x) = r_{k−1}[Φ_{y_k,z_k}](w,x). From Lemma 7 we obtain
\[
\|r_k[\Phi_{y,z}] - \tilde r_k[\Phi_{y,z}]\|_{\infty, W\times X} \le \sum_{j=1}^{k} \|\Lambda_j^{(k)}\|_\infty\, |\tilde r_{j-1}[\Phi_{y,z}](w_j, x_j) - a_j| \le \varepsilon\, \Lambda_{1,k},
\]
because
\[
|\tilde r_{j-1}[\Phi_{y,z}](w_j, x_j) - a_j| = |R_{j-1}(w_j, x_j, y, z) - A_{yz}[R_{j-1}|_{w_j,x_j}](y,z)| \le \varepsilon.
\]
Let F_k := \sup_{y,z} \|r_{k-1}[\Phi_{y,z}] - \tilde r_{k-1}[\Phi_{y,z}]\|_{\infty, W\times X}. Then from assumption (29b) we have that
\begin{align*}
\|\xi_k - \tilde\xi_k\|_{\infty, W\times X}
&= \|r_{k-1}[\Phi_{y_k,z_k}] - A_{wx}[r_{k-1}[\Phi_{y_k,z_k}]]\|_{\infty, W\times X}\\
&\le \|r_{k-1}[\Phi_{y_k,z_k}] - \tilde r_{k-1}[\Phi_{y_k,z_k}]\|_\infty + \|\tilde r_{k-1}[\Phi_{y_k,z_k}] - A_{wx}[\tilde r_{k-1}[\Phi_{y_k,z_k}]]\|_\infty
\le F_k + \varepsilon_k,
\end{align*}
where ε_k := ε(Λ_{1,k−1} + 1). For ε small enough we may assume that
\[
F_k + \varepsilon_k \le \tfrac12\, |\xi_k(w_k, x_k)|. \tag{30}
\]
Then Lemma 6 proves the estimate
\[
F_{k+1} \le \sum_{i=1}^{k} \gamma_i\, (F_i + \varepsilon_i) \tag{31}
\]
with
\[
\gamma_i := \frac{\sup_{y,z} |\tilde r_{i-1}[\Phi_{y,z}](w_i, x_i)|}{|\xi_i(w_i, x_i)|}\, (\Lambda_{i,k} + 1).
\]
From |r̃_{i−1}[Φ_{y,z}](w_i, x_i) − A_{yz}[R_{i−1}|_{w_i,x_i}](y,z)| ≤ F_i + ε_i we obtain that
\[
\frac{\sup_{y,z} |\tilde r_{i-1}[\Phi_{y,z}](w_i, x_i)|}{|\xi_i(w_i, x_i)|}
\le \frac{\sup_{y,z} |A_{yz}[R_{i-1}|_{w_i,x_i}](y,z)| + F_i + \varepsilon_i}{|\xi_i(w_i, x_i)| - F_i - \varepsilon_i}
\le 2 c_{\mathrm{piv}}^{(i)} + 1
\]
due to (30). Similar to the proof of Theorem 2, from $F_1 = 0$ and (31) we see that
\[
F_k \le \sum_{j=1}^{k-1} \kappa_j \delta_j \prod_{i=j+1}^{k-1} (\kappa_i + 1) \le \sum_{j=1}^{k-1} \delta_j \prod_{i=j}^{k-1} (\kappa_i + 1).
\]
It follows that
\[
\|r_k\|_{\infty,W\times X} \le \|r_k - \tilde r_k\|_{\infty,W\times X} + F_{k+1} + \|\tilde r_k\|_{\infty,W\times X} \le \|\tilde r_k\|_{\infty,W\times X} + \vartheta_{1,k}\,\varepsilon + 2\sum_{j=1}^{k} \delta_j \prod_{i=j}^{k} (c^{(i)}_{\mathrm{piv}} + 1)(\vartheta_{i,k} + 1).
\]
Notice that $\|\tilde r_k\|_{\infty,W\times X}$ was estimated in Sect. 3.
5.1 Matrix approximation
We apply the algorithm (27) to a matrix $A \in \mathbb{R}^{n\times n\times n\times n}$ with entries $a_{i_1i_2i_3i_4} = \Phi(p_{i_1}, p_{i_2}, p_{i_3}, p_{i_4})$ and $p_i = \frac{1}{n}(i - \frac12)$, $i = 1, \dots, n$, generated by evaluating the smooth function
\[
\Phi(w,x,y,z) = (1 + w + x + y + z)^{-1}.
\]
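The bivariate cross approximation operators $A_{wx}$ and $A_{yz}$ used by the scheme act on two-dimensional slices of such arrays. As a self-contained illustration (not the paper's implementation; full pivoting and the restriction of $\Phi$ to two variables at $y = z = 0$ are simplifying choices made here), the following sketch applies bivariate ACA to the matrix $M_{ij} = \Phi(p_i, p_j, 0, 0) = (1 + p_i + p_j)^{-1}$:

```python
import numpy as np

def aca_full(M, eps=1e-10, kmax=50):
    """Bivariate ACA with full pivoting: M is approximated by U @ V.T."""
    R = M.copy()                         # residual r_k
    scale = np.abs(M).max()
    us, vs = [], []
    for _ in range(kmax):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        pivot = R[i, j]
        if abs(pivot) <= eps * scale:
            break                        # residual is uniformly small
        us.append(R[:, j].copy())
        vs.append(R[i, :].copy() / pivot)
        R -= np.outer(us[-1], vs[-1])    # rank-one update of the residual
    return np.column_stack(us), np.column_stack(vs)

n = 200
p = (np.arange(1, n + 1) - 0.5) / n
# slice of Phi(w, x, y, z) = (1 + w + x + y + z)^(-1) at y = z = 0
M = 1.0 / (1.0 + p[:, None] + p[None, :])
U, V = aca_full(M)
err = np.abs(M - U @ V.T).max()
```

Because the slice is asymptotically smooth, the pivots decay rapidly and a rank far below $n$ reproduces $M$ to the chosen tolerance.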
The stopping criterion (see (20))
\[
\|S_{k+1} - S_k\|_F \le \varepsilon\,\|S_{k+1}\|_F
\]
can be evaluated with linear complexity, because from (28) we obtain the matrix representation
\[
(S_k)_{i_1i_2i_3i_4} = \sum_{\mu=1}^{k} \Big(\sum_{\lambda=1}^{k_\mu} (u^\mu_\lambda)_{i_1}(v^\mu_\lambda)_{i_2}\Big) \Big(\sum_{\lambda=1}^{k'_\mu} (u'^\mu_\lambda)_{i_3}(v'^\mu_\lambda)_{i_4}\Big)
\]
with suitable vectors $u^\mu_\lambda$, $v^\mu_\lambda$, $u'^\mu_\lambda$, and $v'^\mu_\lambda$, which shows that
\[
\|S_k\|_F^2 = \sum_{\mu,\nu=1}^{k} \alpha_{\mu\nu}\,\beta_{\mu\nu},
\]
where
\[
\alpha_{\mu\nu} := \sum_{\lambda=1}^{k_\mu} \sum_{\lambda'=1}^{k_\nu} (u^\mu_\lambda, u^\nu_{\lambda'})(v^\mu_\lambda, v^\nu_{\lambda'}) \quad\text{and}\quad \beta_{\mu\nu} := \sum_{\lambda=1}^{k'_\mu} \sum_{\lambda'=1}^{k'_\nu} (u'^\mu_\lambda, u'^\nu_{\lambda'})(v'^\mu_\lambda, v'^\nu_{\lambda'}).
\]
Furthermore, $\|S_{k+1} - S_k\|_F^2 = \alpha_{k+1,k+1}\,\beta_{k+1,k+1}$. Also in the case of four-dimensional arrays the computation of $R_k$ can be avoided by tracing back the error as in (17) and (18).
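This Frobenius-norm identity is easy to check numerically. The sketch below (the variable names and the list-of-factor-matrices layout are assumptions for illustration, not the paper's code) computes $\|S_k\|_F^2$ from Gram matrices of the factor columns, so every inner product costs $O(n)$ and the $n^4$ tensor is never formed, and verifies the result against a dense evaluation on a tiny example:

```python
import numpy as np

def frob_norm_sq(Us, Vs, Ups, Vps):
    """||S_k||_F^2 for (S_k)_{i1 i2 i3 i4} =
    sum_mu [U_mu V_mu^T]_{i1 i2} * [U'_mu V'_mu^T]_{i3 i4},
    computed from inner products of the factor columns only."""
    total = 0.0
    for mu in range(len(Us)):
        for nu in range(len(Us)):
            # alpha_{mu nu} = sum_{l,l'} (u^mu_l, u^nu_l') (v^mu_l, v^nu_l')
            alpha = np.sum((Us[mu].T @ Us[nu]) * (Vs[mu].T @ Vs[nu]))
            beta = np.sum((Ups[mu].T @ Ups[nu]) * (Vps[mu].T @ Vps[nu]))
            total += alpha * beta
    return total

# verification against the dense tensor on a tiny example
rng = np.random.default_rng(0)
n, k = 5, 3
Us = [rng.standard_normal((n, 2)) for _ in range(k)]
Vs = [rng.standard_normal((n, 2)) for _ in range(k)]
Ups = [rng.standard_normal((n, 2)) for _ in range(k)]
Vps = [rng.standard_normal((n, 2)) for _ in range(k)]
S = sum(np.einsum('ab,cd->abcd', U @ V.T, Up @ Vp.T)
        for U, V, Up, Vp in zip(Us, Vs, Ups, Vps))
dense = np.sum(S**2)
fast = frob_norm_sq(Us, Vs, Ups, Vps)
```

The double sum over $\mu,\nu$ costs $O(k^2)$ Gram products, independent of the tensor size $n^4$.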
The pivots $(i^{(\mu)}_1, i^{(\mu)}_2)$ are chosen as the indices of the entry of maximum modulus in the rank-$k_\mu$ matrix $U_\mu V_\mu^T$, while $(i^{(\mu+1)}_3, i^{(\mu+1)}_4)$ corresponds to the maximum entry in the rank-$k'_\mu$ matrix $U'_\mu (V'_\mu)^T$, where $U_\mu$, $V_\mu$, $U'_\mu$, and $V'_\mu$ consist of the columns $u^\mu_\lambda$, $v^\mu_\lambda$, $u'^\mu_\lambda$, and $v'^\mu_\lambda$, respectively. Both maxima can be found with linear complexity using the technique from Sect. 4.1.

Hence, the complexity of the matrix approximation algorithm is $O(n \sum_{\mu=1}^{k} (k_\mu^2 + (k'_\mu)^2))$ and the storage required by the approximation is $O(n(k_S + k'_S))$, where $k_S := \sum_{\mu=1}^{k} k_\mu$ and $k'_S := \sum_{\mu=1}^{k} k'_\mu$.
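To make the storage bound concrete: keeping only the factor columns amounts to $2n(k_S + k'_S)$ floating-point numbers out of the $n^4$ tensor entries, which is where the compression ratio $2(k_S + k'_S)/n^3$ reported below comes from, and any single entry of $S_k$ is recoverable in $O(k_S + k'_S)$ operations. A minimal sketch with an assumed storage layout (not the paper's implementation):

```python
import numpy as np

def storage(Us, Vs, Ups, Vps):
    """Stored floats: 2n*kS for the (i1,i2)-factors plus 2n*kS' for the
    (i3,i4)-factors, where kS = sum_mu k_mu and kS' = sum_mu k'_mu."""
    return (sum(U.size + V.size for U, V in zip(Us, Vs))
            + sum(U.size + V.size for U, V in zip(Ups, Vps)))

def entry(Us, Vs, Ups, Vps, i1, i2, i3, i4):
    """Evaluate (S_k)_{i1 i2 i3 i4} in O(kS + kS') operations."""
    return sum((U[i1] @ V[i2]) * (Up[i3] @ Vp[i4])
               for U, V, Up, Vp in zip(Us, Vs, Ups, Vps))

# tiny check against the dense fourth order tensor
rng = np.random.default_rng(1)
n = 4
Us = [rng.standard_normal((n, 1)), rng.standard_normal((n, 2))]
Vs = [rng.standard_normal((n, 1)), rng.standard_normal((n, 2))]
Ups = [rng.standard_normal((n, 2)), rng.standard_normal((n, 1))]
Vps = [rng.standard_normal((n, 2)), rng.standard_normal((n, 1))]
S = sum(np.einsum('ab,cd->abcd', U @ V.T, Up @ Vp.T)
        for U, V, Up, Vp in zip(Us, Vs, Ups, Vps))
val = entry(Us, Vs, Ups, Vps, 0, 1, 2, 3)
```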
Table 7 shows the number of steps $k$, the ranks $k_S$ and $k'_S$, and the CPU time required for constructing the approximation. The column labeled "factor" contains the compression ratio $2(k_S + k'_S)/n^3$.
                       eps = 10^-3                       eps = 10^-4           eps = 10^-5
      n        k  kS  k'S  factor     time [s]   k  kS  k'S  time [s]   k  kS  k'S  time [s]
    160 000    4  19  13   2 · 10^-14    2.5     5  27  18      4.0     7  53  35     10.6
    320 000    4  19  13   2 · 10^-15    6.2     5  27  19     10.5     7  59  35     31.9
    640 000    4  19  13   2 · 10^-16   15.8     5  26  19     26.1     7  57  34     69.8
  1 280 000    4  19  13   3 · 10^-17   31.5     5  28  19     55.6     7  55  35    142.3

Table 7: ACA for fourth order tensors.
References
[1] M.-B. A. Babaev. Best approximation by bilinear forms. Mat. Zametki, 46(2):21–33, 158, 1989.

[2] M.-B. A. Babaev. Exact annihilators and their applications in approximation theory. Trans. Acad. Sci. Azerb. Ser. Phys.-Tech. Math. Sci., 20(1, Math. Mech.):17–24, 233, 2000.

[3] M. Bebendorf. Approximation of boundary element matrices. Numer. Math., 86(4):565–589, 2000.

[4] M. Bebendorf. Hierarchical Matrices: A Means to Efficiently Solve Elliptic Boundary Value Problems, volume 63 of Lecture Notes in Computational Science and Engineering (LNCSE). Springer, 2008. ISBN 978-3-540-77146-3.

[5] M. Bebendorf and R. Grzhibovskis. Accelerating Galerkin BEM for linear elasticity using adaptive cross approximation. Mathematical Methods in the Applied Sciences, 29:1721–1747, 2006.

[6] M. Bebendorf and S. Rjasanow. Adaptive low-rank approximation of collocation matrices. Computing, 70(1):1–24, 2003.

[7] Gregory Beylkin and Lucas Monzón. On approximation of functions by exponential sums. Appl. Comput. Harmon. Anal., 19(1):17–48, 2005.

[8] D. Braess and W. Hackbusch. Approximation of 1/x by exponential sums in [1, ∞). IMA J. Numer. Anal., 25(4):685–697, 2005.

[9] D. Braess and W. Hackbusch. On the efficient computation of high-dimensional integrals and the approximation by exponential sums. Technical Report 3, Max-Planck-Institute MiS, 2009.

[10] Hans-Joachim Bungartz and Michael Griebel. Sparse grids. Acta Numerica, 13:147–269, 2004.

[11] J. Douglas Carroll and Jih-Jie Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of Eckart-Young decomposition. Psychometrika, 35(3):283–319, 1970.

[12] C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika, 1:211–218, 1936.

[13] Mike Espig. Effiziente Bestapproximation mittels Summen von Elementartensoren in hohen Dimensionen. PhD thesis, University of Leipzig, 2007.

[14] W. Hackbusch and S. Kühn. A new scheme for the tensor representation. Technical Report 2, Max-Planck-Institute MiS, 2009.
[15] Wassily Hoeffding. A class of statistics with asymptotically normal distribution. Ann. Math. Statistics, 19:293–325, 1948.

[16] Ilghiz Ibraghimov. Application of the three-way decomposition for matrix compression. Numer. Linear Algebra Appl., 9(6-7):551–565, 2002. Preconditioned robust iterative solution methods, PRISM '01 (Nijmegen).

[17] B. N. Khoromskij. Structured rank-$(r_1,\dots,r_d)$ decomposition of function-related tensors in $\mathbb{R}^d$. Comput. Methods Appl. Math., 6(2):194–220, 2006.

[18] Tamara G. Kolda. A counterexample to the possibility of an extension of the Eckart-Young low-rank approximation theorem for the orthogonal rank tensor decomposition. SIAM J. Matrix Anal. Appl., 24(3):762–767, 2003.

[19] Pieter M. Kroonenberg and Jan de Leeuw. Principal component analysis of three-mode data by means of alternating least squares algorithms. Psychometrika, 45(1):69–97, 1980.

[20] I. V. Oseledets, D. V. Savostianov, and E. E. Tyrtyshnikov. Tucker dimensionality reduction of three-dimensional arrays in linear time. SIAM J. Matrix Anal. Appl., 30(3):939–956, 2008.

[21] V. V. Pospelov. Approximation of functions of several variables by products of functions of a single variable. Akad. Nauk SSSR Inst. Prikl. Mat. Preprint, (32):75, 1978.

[22] Themistocles M. Rassias and Jaromír Šimša. Finite Sums Decompositions in Mathematical Analysis. Pure and Applied Mathematics (New York). John Wiley & Sons Ltd., Chichester, 1995.

[23] Erhard Schmidt. Zur Theorie der linearen und nichtlinearen Integralgleichungen. Math. Ann., 63(4):433–476, 1907.

[24] J. Schneider. Error estimates for two-dimensional cross approximation. Technical Report 5, Max-Planck-Institute MiS, 2009.

[25] Jaromír Šimša. The best L2-approximation by finite sums of functions with separable variables. Aequationes Math., 43(2-3):248–263, 1992.

[26] Jos M. F. ten Berge, Jan de Leeuw, and Pieter M. Kroonenberg. Some additional results on principal components analysis of three-mode data by means of alternating least squares algorithms. Psychometrika, 52(2):183–191, 1987.

[27] Ledyard R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31:279–311, 1966.

[28] Ch. Zenger. Sparse grids. In W. Hackbusch, editor, Parallel Algorithms for Partial Differential Equations, volume 31 of Notes on Numerical Fluid Mechanics, pages 241–251. Vieweg, 1991.

[29] Tong Zhang and Gene H. Golub. Rank-one approximation to high order tensors. SIAM J. Matrix Anal. Appl., 23(2):534–550, 2001.