Page 1

From Matrix to Tensor: The Transition to Numerical Multilinear Algebra

Lecture 4. Tensor-Related Singular Value Decompositions

Charles F. Van Loan

Cornell University

The Gene Golub SIAM Summer School 2010
Selva di Fasano, Brindisi, Italy

Page 2

Where We Are

Lecture 1. Introduction to Tensor Computations

Lecture 2. Tensor Unfoldings

Lecture 3. Transpositions, Kronecker Products, Contractions

Lecture 4. Tensor-Related Singular Value Decompositions

Lecture 5. The CP Representation and Tensor Rank

Lecture 6. The Tucker Representation

Lecture 7. Other Decompositions and Nearness Problems

Lecture 8. Multilinear Rayleigh Quotients

Lecture 9. The Curse of Dimensionality

Lecture 10. Special Topics

Page 3

What is this Lecture About?

Pushing the SVD Envelope

The SVD is a powerful tool for exposing the structure of a matrix and for capturing its essence through optimal, data-sparse representations:

A = ∑_{k=1}^{rank(A)} σ_k u_k v_k^T ≈ ∑_{k=1}^{r} σ_k u_k v_k^T

For a tensor A, let’s try for something similar...

A = ∑ whatever!
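In the matrix case the optimal rank-r truncation comes straight from the SVD; a minimal MATLAB sketch (the matrix and the rank are arbitrary illustrations):

A = randn(8,6);                           % any matrix
r = 2;                                    % target rank
[U,Sigma,V] = svd(A,'econ');
Ar = U(:,1:r)*Sigma(1:r,1:r)*V(:,1:r)';   % sum of the r leading sigma_k*u_k*v_k'
err = norm(A - Ar,'fro')                  % sqrt of the sum of the discarded sigma_k^2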

Page 4

What is this Lecture About?

The Higher Order Singular Value Decomposition (HOSVD)

The HOSVD of a tensor A ∈ IRn1×···×nd involves computing the matrix SVDs of its modal unfoldings A_(1), . . . , A_(d). This results in a representation of A as a sum of rank-1 tensors.

Before we present the HOSVD, we need to understand the mode-k matrix product, which can be thought of as a tensor-times-matrix product.

Page 5

What is this Lecture About?

The Kronecker Product Singular Value Decomposition (KPSVD)

The KPSVD of a matrix A represents A as an SVD-like sum of Kronecker products. Applied to an unfolding of A, it results in a representation of A as a sum of low-rank tensors.

Before we present the KPSVD, we need to understand the nearest Kronecker product problem.

Page 6

The Mode-k Matrix Product

Main Idea

A mode-k matrix product is a special contraction that involves a matrix and a tensor. A new tensor of the same order is obtained by applying the matrix to each mode-k fiber of the tensor.

Page 7

The Mode-k Matrix Product

A mode-1 example when the Tensor A is Second Order...

[ b11 b12 b13 ]   [ m11 m12 ] [ a11 a12 a13 ]
[ b21 b22 b23 ] = [ m21 m22 ] [ a21 a22 a23 ]

(The fibers of A are its columns.)

A mode-2 example when the Tensor A is Second Order...

[ b11 b12 ]   [ m11 m12 m13 ]
[ b21 b22 ]   [ m21 m22 m23 ] [ a11 a21 ]
[ b31 b32 ] = [ m31 m32 m33 ] [ a12 a22 ]
[ b41 b42 ]   [ m41 m42 m43 ] [ a13 a23 ]

(The fibers of A are its rows.)

The matrix doesn’t have to be square.

Page 8

The Mode-k Matrix Product

A Mode-2 Example When A ∈ IR4×3×2

[ b111 b211 b311 b411 b112 b212 b312 b412 ]   [ m11 m12 m13 ]
[ b121 b221 b321 b421 b122 b222 b322 b422 ]   [ m21 m22 m23 ]   [ a111 a211 a311 a411 a112 a212 a312 a412 ]
[ b131 b231 b331 b431 b132 b232 b332 b432 ] = [ m31 m32 m33 ] · [ a121 a221 a321 a421 a122 a222 a322 a422 ]
[ b141 b241 b341 b441 b142 b242 b342 b442 ]   [ m41 m42 m43 ]   [ a131 a231 a331 a431 a132 a232 a332 a432 ]
[ b151 b251 b351 b451 b152 b252 b352 b452 ]   [ m51 m52 m53 ]

Note that (1) B ∈ IR4×5×2 and (2) B_(2) = M · A_(2).

Page 9

The Mode-k Matrix Product

In General...

If A ∈ IRn1×···×nd and M ∈ IRmk×nk , then

B(α1, . . . , αk−1, i, αk+1, . . . , αd) = ∑_{j=1}^{nk} M(i, j) · A(α1, . . . , αk−1, j, αk+1, . . . , αd)

is the mode-k matrix product of M and A.

Multiply All the Mode-k Fibers by M...

B_(k) = M · A_(k)

Page 10

The Mode-k Matrix Product

Notation

The mode-k matrix product of M and A is denoted by

B = A ×k M

Thus, if B = A ×k M, then B is defined by B_(k) = M · A_(k).

Order and Dimension

If A ∈ IRn1×···×nd , M ∈ IRmk×nk , and B = A ×k M, then

B = B(1:n1, . . . , 1:nk−1, 1:mk , 1:nk+1, . . . , 1:nd).

Thus, B has the same order as A.
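The defining identity B_(k) = M · A_(k) also gives a direct way to compute the product; a minimal plain-MATLAB sketch (no toolbox needed, sizes chosen only for illustration):

d = 3; n = [4 3 2]; k = 2; m = 5;            % example sizes
A = randn(n);  M = randn(m,n(k));            % M is m-by-n(k)
perm = [k 1:k-1 k+1:d];
Ak = reshape(permute(A,perm), n(k), []);     % mode-k unfolding A_(k)
Bk = M*Ak;                                   % B_(k) = M*A_(k)
nB = n; nB(k) = m;
B  = ipermute(reshape(Bk, nB(perm)), perm);  % fold B_(k) back into the tensor B
% With the Tensor Toolbox, B should agree with double(ttm(tensor(A),M,k)).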

Page 11

Matlab Tensor Toolbox: Mode-k Matrix Product Using ttm

n = [2 5 4 7];
A = tenrand(n);
for k=1:4
   M = randn(n(k),n(k));
   % Compute the mode-k product of tensor A
   % and matrix M to obtain tensor B...
   B = ttm(A,M,k);
end

Problem 4.1. Write a function B = Myttm(A,M,k) that does the same thing as ttm. Make effective use of tenmat.

Page 12

Problem 4.2. If A ∈ IRn1×···×nd and v ∈ IRnk , then the mode-k vector product is defined by

B(α1, . . . , αk−1, αk+1, . . . , αd) = ∑_{j=1}^{nk} v(j) · A(α1, . . . , αk−1, j, αk+1, . . . , αd)

The Tensor Toolbox function ttv (for tensor-times-vector) can be used to compute this contraction. Note that

B = B(1:n1, . . . , 1:nk−1, 1:nk+1, . . . , 1:nd).

Thus, the order of B is one less than the order of A.

Write a Matlab function B = Myttv(A,v,k) that carries out the mode-k vector product. Use the fact that B is a reshaping of v^T A_(k).
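A plain-MATLAB sketch of that reshaping fact (illustrative sizes; the tenmat-based route is analogous):

d = 3; n = [4 3 2]; k = 2;
A = randn(n);  v = randn(n(k),1);
Ak = reshape(permute(A,[k 1:k-1 k+1:d]), n(k), []);   % mode-k unfolding A_(k)
B  = reshape(v'*Ak, [n(1:k-1) n(k+1:d)]);             % order-(d-1) result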

Page 13

The Mode-k Matrix Product

Property 1. (Reshaping as a Matrix-Vector Product)

If A ∈ IRn1×···×nd , M ∈ IRmk×nk , and

B = A ×k M

then

vec(B) = ( Ind ⊗ · · · ⊗ Ink+1 ⊗ M ⊗ Ink−1 ⊗ · · · ⊗ In1 ) · vec(A)
       = ( Ink+1···nd ⊗ M ⊗ In1···nk−1 ) · vec(A)
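A quick numerical check of Property 1 (a sketch; tenrand and ttm are the Tensor Toolbox calls used earlier, and the sizes are arbitrary):

n = [3 4 2]; k = 2; m = 5;
A = tenrand(n);  M = randn(m,n(k));
B = ttm(A,M,k);
L = kron(eye(prod(n(k+1:end))), kron(M, eye(prod(n(1:k-1)))));
a = double(A); b = double(B);            % underlying multidimensional arrays
err = norm(b(:) - L*a(:))                % expect roughly machine precision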

Problem 4.3. Prove this. Hint. Show that if p = [k 1:k−1 k+1:d] then

(A ×k M)[ p ] = A[ p ] ×1 M.

Page 14

The Mode-k Matrix Product

Property 2. (Successive Products in the Same Mode)

If A ∈ IRn1×···×nd , M1 ∈ IRp×nk , and M2 ∈ IRmk×p, then

(A ×k M1) ×k M2 = A ×k (M2M1).

Problem 4.4. Prove this using Property 1 and the Kronecker product fact that (A⊗ B)(C ⊗ D) = (AC)⊗ (BD).

Page 15

The Mode-k Matrix Product

Property 3. (Successive Products in Different Modes)

If A ∈ IRn1×···×nd , Mk ∈ IRmk×nk , Mj ∈ IRmj×nj , and k ≠ j , then

(A ×k Mk) ×j Mj = (A ×j Mj) ×k Mk

Problem 4.5. Prove this using Property 1 and the Kronecker product fact that (A⊗ B)(C ⊗ D) = (AC)⊗ (BD).

Page 16

The Tucker Product

Definition

If A ∈ IRn1×···×nd and Mk ∈ IRmk×nk for k = 1:d , then

B = A ×1 M1 ×2 M2 · · · ×d Md

is a Tucker product of A and M1, . . . ,Md .

Page 17

Matlab Tensor Toolbox: Tucker Products

function B = TuckerProd(A,M)
% A is an n(1)-by-...-by-n(d) tensor.
% M is a length-d cell array with
% M{k} an m(k)-by-n(k) matrix.
% B is an m(1)-by-...-by-m(d) tensor given by
%    B = A x1 M{1} x2 M{2} ... xd M{d}
% where "xk" denotes the mode-k matrix product.
B = A;
for k=1:length(A.size)
   B = ttm(B,M{k},k);
end
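A possible usage sketch (assuming the Tensor Toolbox is on the path; the sizes are arbitrary):

n = [4 3 2];
A = tenrand(n);
M = {randn(2,4), randn(5,3), randn(3,2)};   % M{k} is m(k)-by-n(k)
B = TuckerProd(A,M);
size(B)                                     % expect [2 5 3]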

Page 18

The Tucker Product Representation

Before...

Given A ∈ IRn1×···×nd and Mk ∈ IRmk×nk for k = 1:d , compute the Tucker product

B = A ×1 M1 ×2 M2 · · · ×d Md

Now...

Given A ∈ IRn1×···×nd , find matrices Uk ∈ IRnk×rk for k = 1:d and a tensor S ∈ IRr1×···×rd such that

A = S ×1 U1 ×2 U2 · · · ×d Ud

is an “illuminating” Tucker product representation of A.

Page 19

The Tucker Product Representation

Multilinear Sum Formulation

Suppose A ∈ IRn1×···×nd and Uk ∈ IRnk×rk . If

A = S ×1 U1 ×2 U2 · · · ×d Ud

where S ∈ IRr1×···×rd , then

A(i) = ∑_{j=1}^{r} S(j) · U1(i1, j1) · · · Ud(id , jd)

(The sum is over all multi-indices j = (j1, . . . , jd) with 1 ≤ jk ≤ rk .)

Page 20

The Tucker Product Representation

Matrix-Vector Product Formulation

Suppose A ∈ IRn1×···×nd and Uk ∈ IRnk×rk . If

A = S ×1 U1 ×2 U2 · · · ×d Ud

where S ∈ IRr1×···×rd , then

vec(A) = (Ud ⊗ · · · ⊗ U1) · vec(S)

Page 21

The Tucker Product Representation

Important Existence Situation

If A ∈ IRn1×···×nd and Uk ∈ IRnk×nk is nonsingular for k = 1:d , then

A = S ×1 U1 ×2 U2 · · · ×d Ud

with

S = A ×1 U1^{-1} ×2 U2^{-1} · · · ×d Ud^{-1} .

We will refer to the Uk as the inverse factors and S as the core tensor.

Proof.

S ×1 U1 ×2 U2 ×3 U3 = ( A ×1 U1^{-1} ×2 U2^{-1} ×3 U3^{-1} ) ×1 U1 ×2 U2 ×3 U3

= A ×1 (U1 U1^{-1}) ×2 (U2 U2^{-1}) ×3 (U3 U3^{-1}) = A

(The d = 3 case is displayed; the general case is identical.)

Page 22

The Tucker Product Representation

Core Tensor Unfoldings

Suppose A ∈ IRn1×···×nd and Uk ∈ IRnk×nk is nonsingular for k = 1:d . If

A = S ×1 U1 ×2 U2 · · · ×d Ud

then the mode-k unfoldings of S satisfy

S_(k) = Uk^{-1} A_(k) ( Ud^{-T} ⊗ · · · ⊗ Uk+1^{-T} ⊗ Uk−1^{-T} ⊗ · · · ⊗ U1^{-T} )

Page 23

The Higher-Order SVD (HOSVD)

Definition

Suppose A ∈ IRn1×···×nd and that its mode-k unfolding has SVD

A_(k) = Uk Σk Vk^T

for k = 1:d . Its higher-order SVD is given by

A = S ×1 U1 ×2 U2 · · · ×d Ud

where

S = A ×1 U1^T ×2 U2^T · · · ×d Ud^T

A Tucker product representation where the inverse factor matrices are the left singular vector matrices for the unfoldings A_(1), . . . , A_(d).

Page 24

Matlab Tensor Toolbox: Computing the HOSVD

function [S,U] = HOSVD(A)
% A is an n(1)-by-...-by-n(d) tensor.
% U is a length-d cell array with the
% property that U{k} is the left singular
% vector matrix of A's mode-k unfolding.
% S is an n(1)-by-...-by-n(d) tensor given by
%    S = A x1 U{1}' x2 U{2}' ... xd U{d}'
S = A;
for k=1:length(A.size)
   C = tenmat(A,k);
   [U{k},Sigma,V] = svd(C.data);
   S = ttm(S,U{k}',k);
end
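A small sanity check (a sketch; HOSVD and TuckerProd are the functions from these slides, so exact reconstruction is expected up to roundoff):

A = tenrand([4 3 2]);
[S,U] = HOSVD(A);
B = TuckerProd(S,U);                  % A = S x1 U{1} x2 U{2} x3 U{3}
a = double(A); b = double(B);
err = norm(a(:) - b(:))               % expect roughly machine precision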

Page 25

The Higher-Order SVD (HOSVD)

The HOSVD of a Matrix

If d = 2 then A is a matrix and the HOSVD is the SVD. Indeed, if

A = A_(1) = U1 Σ1 V1^T
A^T = A_(2) = U2 Σ2 V2^T

then we can set U = U1 = V2 and V = U2 = V1. Note that

S = (A ×1 U1^T) ×2 U2^T = (U1^T A) ×2 U2^T = U1^T A U2 = U1^T A V1 = Σ1.

Page 26

The Higher-Order SVD (HOSVD)

The Modal Unfoldings of the Core Tensor S

If A = S ×1 U1 ×2 U2 · · · ×d Ud is the HOSVD of A ∈ IRn1×···×nd , then for k = 1:d

S_(k) = Uk^T A_(k) ( Ud ⊗ · · · ⊗ Uk+1 ⊗ Uk−1 ⊗ · · · ⊗ U1 )

The Core Tensor S Encodes Singular Value Information

Since Uk^T A_(k) Vk = Σk is the SVD of A_(k), we have

S_(k) = Σk Vk^T ( Ud ⊗ · · · ⊗ Uk+1 ⊗ Uk−1 ⊗ · · · ⊗ U1 ) .

It follows that the rows of S_(k) are mutually orthogonal and that the singular values of A_(k) are the 2-norms of these rows.
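A quick numerical illustration of these two facts (a sketch; HOSVD is the function from the earlier slide):

A = tenrand([4 3 2]);
[S,U] = HOSVD(A);
k = 1;
Sk = tenmat(S,k);  Ak = tenmat(A,k);
G = Sk.data*Sk.data';        % should be (nearly) diagonal: rows of S_(k) are orthogonal
rowNorms = sqrt(diag(G))'
sigma    = svd(Ak.data)'     % should match rowNorms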

Page 27

The Higher-Order SVD (HOSVD)

The HOSVD as a Multilinear Sum

If A = S ×1 U1 ×2 U2 · · · ×d Ud is the HOSVD of A ∈ IRn1×···×nd , then

A(i) = ∑_{j=1}^{n} S(j) U1(i1, j1) · · · Ud(id , jd)

The HOSVD as a Matrix-Vector Product

If A = S ×1 U1 ×2 U2 · · · ×d Ud is the HOSVD of A ∈ IRn1×···×nd , then

vec(A) = (Ud ⊗ · · · ⊗ U1) · vec(S)

Note that Ud ⊗ · · · ⊗ U1 is orthogonal.

Page 28

Problem 4.6. Suppose A = S ×1 U1 ×2 U2 · · · ×d Ud is the HOSVD of A ∈ IRn1×···×nd and that r ≤ min{n1, . . . , nd}. Through Matlab experimentation, what can you say about the approximation

A ≈ Sr ×1 U1(:, 1:r) ×2 U2(:, 1:r) · · · ×d Ud(:, 1:r)

where Sr = S(1:r , 1:r , . . . , 1:r)?

Problem 4.7. Formulate an HOQRP factorization for a tensor A ∈ IRn1×···×nd that is based on the QR-with-column-pivoting factorizations

A_(k) Pk = Qk Rk

for k = 1:d . Does the core tensor have any special properties?

Page 29

The Nearest Kronecker Product Problem

The Problem

Given the block matrix

A = [ A11    · · ·  A1,c2  ]
    [  ...          ...    ]      Ai2,j2 ∈ IRr1×c1
    [ Ar2,1  · · ·  Ar2,c2 ]

find B ∈ IRr2×c2 and C ∈ IRr1×c1 so that

φA(B,C) = ‖ A − B ⊗ C ‖F

is minimized.

Looks like the nearest rank-1 matrix problem which has an SVD solution.

Page 30

The Nearest Kronecker Product Problem

The Tensor Connection

The block matrix

A = [ A11    · · ·  A1,c2  ]
    [  ...          ...    ]      Ai2,j2 ∈ IRr1×c1
    [ Ar2,1  · · ·  Ar2,c2 ]

corresponds to an order-4 tensor

Ai2,j2(i1, j1) ↔ A(i1, j1, i2, j2)

and so does M = (Mij) = B ⊗ C :

Mi2,j2(i1, j1) ↔ B(i2, j2)C(i1, j1)

Page 31

The Nearest Kronecker Product Problem

The Objective Function in Tensor Terms

φA(B,C) = ‖ A − B ⊗ C ‖F

        = ( ∑_{i1=1}^{r1} ∑_{j1=1}^{c1} ∑_{i2=1}^{r2} ∑_{j2=1}^{c2} ( A(i1, j1, i2, j2) − B(i2, j2)C(i1, j1) )^2 )^{1/2}

We are trying to approximate an order-4 tensor with a pair of order-2 tensors.

Analogous to approximating a matrix (an order-2 tensor) with a rank-1 matrix (a pair of order-1 tensors).

Page 32

The Nearest Kronecker Product Problem

Reshaping the Objective Function (3-by-2 case)

‖ A − B ⊗ C ‖F = ‖ Ã − vec(B) vec(C)^T ‖F

where

A = [ a11 a12 a13 a14 ]      B = [ b11 b12 ]      C = [ c11 c12 ]
    [ a21 a22 a23 a24 ]          [ b21 b22 ]          [ c21 c22 ]
    [ a31 a32 a33 a34 ]          [ b31 b32 ]
    [ a41 a42 a43 a44 ]
    [ a51 a52 a53 a54 ]
    [ a61 a62 a63 a64 ]

and

Ã = [ a11 a21 a12 a22 ]      vec(B) = [ b11 ]      vec(C)^T = [ c11 c21 c12 c22 ]
    [ a31 a41 a32 a42 ]               [ b21 ]
    [ a51 a61 a52 a62 ]               [ b31 ]
    [ a13 a23 a14 a24 ]               [ b12 ]
    [ a33 a43 a34 a44 ]               [ b22 ]
    [ a53 a63 a54 a64 ]               [ b32 ]

Page 33

The Nearest Kronecker Product Problem

Minimizing the Objective Function (3-by-2 case)

It is a nearest rank-1 problem:

φA(B,C) = ‖ Ã − vec(B) vec(C)^T ‖F

with Ã, vec(B), and vec(C) as on the previous slide, and with SVD solution

Ã = U Σ V^T ,   vec(B) = √σ1 · U(:, 1),   vec(C) = √σ1 · V(:, 1)

Page 34

The Nearest Kronecker Product Problem

The “Tilde Matrix”

A = [ a11 a12 a13 a14 ]   [ A11 A12 ]
    [ a21 a22 a23 a24 ]   [ A21 A22 ]
    [ a31 a32 a33 a34 ] = [ A31 A32 ]
    [ a41 a42 a43 a44 ]
    [ a51 a52 a53 a54 ]
    [ a61 a62 a63 a64 ]

implies

Ã = [ a11 a21 a12 a22 ]   [ vec(A11)^T ]
    [ a31 a41 a32 a42 ]   [ vec(A21)^T ]
    [ a51 a61 a52 a62 ] = [ vec(A31)^T ]
    [ a13 a23 a14 a24 ]   [ vec(A12)^T ]
    [ a33 a43 a34 a44 ]   [ vec(A22)^T ]
    [ a53 a63 a54 a64 ]   [ vec(A32)^T ]

Page 35

The Nearest Kronecker Product Problem

Solution via Lanczos SVD

Since only the leading singular triple (σ1, u1, v1) associated with Ã's SVD is required, we can use the Lanczos SVD. Note that if A is sparse, then Ã is sparse.
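A sketch of the whole procedure in MATLAB (the block sizes are illustrative; the rearranged matrix Atilde is built explicitly and svds extracts the leading singular triple):

r1 = 2; c1 = 2; r2 = 3; c2 = 2;                 % A is r2-by-c2 blocks of size r1-by-c1
A = randn(r1*r2, c1*c2);
Atilde = zeros(r2*c2, r1*c1);                   % one row per block, block row index fastest
for j2 = 1:c2
   for i2 = 1:r2
      blk = A((i2-1)*r1+1:i2*r1, (j2-1)*c1+1:j2*c1);
      Atilde(i2+(j2-1)*r2,:) = blk(:)';         % row = vec(block)'
   end
end
[u,sigma,v] = svds(Atilde,1);                   % leading singular triple of Atilde
B = reshape(sqrt(sigma)*u, r2, c2);             % vec(B) = sqrt(sigma1)*u1
C = reshape(sqrt(sigma)*v, r1, c1);             % vec(C) = sqrt(sigma1)*v1
err = norm(A - kron(B,C),'fro')                 % = phi_A(B,C), minimized over all B and C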

Page 36

Problem 4.8. Write a Matlab function

[B,C] = NearestKP(A,r1,c1,r2,c2)

that solves the nearest Kronecker product problem where A ∈ IRm×n with m = r1r2, n = c1c2, and A is regarded as an r2-by-c2 block matrix with r1-by-c1 blocks. Make effective use of Matlab's sparse SVD solver svds.

Problem 4.9. Write a Matlab function

[B,C] = NearestTensor(A)

where A is an order-4 tensor, B is an order-2 tensor, and C is an order-2 tensor such that

φA(B, C) = ( ∑_i | A(i) − B(i3, i4)C(i1, i2) |^2 )^{1/2}

is minimized. Use tenmat to set up Ã and use svd.

Page 37

The Kronecker Product SVD (KPSVD)

Theorem

If

A = [ A11    · · ·  A1,c2  ]
    [  ...          ...    ]      Ai2,j2 ∈ IRr1×c1
    [ Ar2,1  · · ·  Ar2,c2 ]

then there exist U1, . . . , UrKP ∈ IRr2×c2 , V1, . . . , VrKP ∈ IRr1×c1 , and scalars σ1 ≥ · · · ≥ σrKP > 0 such that

A = ∑_{k=1}^{rKP} σk Uk ⊗ Vk .

The sets {vec(Uk)} and {vec(Vk)} are orthonormal and rKP is the Kronecker rank of A with respect to the chosen blocking.

Page 38

The Kronecker Product SVD (KPSVD)

Constructive Proof

Compute the SVD of the tilde matrix Ã:

Ã = U Σ V^T = ∑_{k=1}^{rKP} σk uk vk^T

and define the Uk and Vk by

vec(Uk) = uk ,   vec(Vk) = vk

for k = 1:rKP , that is,

Uk = reshape(uk , r2, c2),   Vk = reshape(vk , r1, c1).
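A MATLAB sketch of this construction (illustrative sizes; Atilde is the tilde matrix built exactly as in the nearest-Kronecker-product sketch):

r1 = 2; c1 = 3; r2 = 3; c2 = 2;
A = randn(r1*r2, c1*c2);
Atilde = zeros(r2*c2, r1*c1);                   % rows are vec's of the blocks of A
for j2 = 1:c2
   for i2 = 1:r2
      blk = A((i2-1)*r1+1:i2*r1, (j2-1)*c1+1:j2*c1);
      Atilde(i2+(j2-1)*r2,:) = blk(:)';
   end
end
[U,S,V] = svd(Atilde);
rKP = rank(Atilde);                             % Kronecker rank w.r.t. this blocking
Asum = zeros(size(A));
for k = 1:rKP
   Uk = reshape(U(:,k), r2, c2);
   Vk = reshape(V(:,k), r1, c1);
   Asum = Asum + S(k,k)*kron(Uk,Vk);
end
err = norm(A - Asum,'fro')                      % expect roughly machine precision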

Page 39

Problem 4.10. Suppose

H = [ F11 ⊗ G11   F12 ⊗ G12 ]
    [ F21 ⊗ G21   F22 ⊗ G22 ]

where Fij ∈ IRm×m and Gij ∈ IRm×m. Regarded as a 2m-by-2m block matrix, what is the Kronecker rank of H?

Problem 4.11. Suppose

H = [ F1 ]
    [ ... ] [ G1 · · · Gq ]
    [ Fp ]

where the Fi and Gj are all m-by-m. Regarded as a p-by-q block matrix, what is the Kronecker rank of H?

Page 40

The Kronecker Product SVD (KPSVD)

Nearest rank-r

If r ≤ rKP , then

Ar = ∑_{k=1}^{r} σk Uk ⊗ Vk

is the nearest matrix to A (in the Frobenius norm) that has Kronecker rank r .

Page 41

The Tensor Outer Product Operation

Applied to Order-1 Tensors...

A = b ◦ c ,   b ∈ IRn1 , c ∈ IRn2

means

A(i1, i2) = b(i1)c(i2)

Note that

tenmat(A, [1], [2]) = b c^T

Page 42

The Tensor Outer Product Operation

Applied to Order-2 Tensors...

A = B ◦ C ,   B ∈ IRn1×r1 , C ∈ IRn2×r2

means

A(i1, i2, i3, i4) = B(i1, i2)C(i3, i4)

Note that

tenmat(A, [ 3 1 ], [ 4 2 ]) = B ⊗ C
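A small numerical check of this identity (a sketch; tensor and tenmat are the Tensor Toolbox calls used earlier):

B = randn(3,2); C = randn(4,5);
A = zeros(3,2,4,5);
for i1 = 1:3
  for i2 = 1:2
    for i3 = 1:4
      for i4 = 1:5
        A(i1,i2,i3,i4) = B(i1,i2)*C(i3,i4);     % A = B o C
      end
    end
  end
end
T = tenmat(tensor(A), [3 1], [4 2]);
err = norm(T.data - kron(B,C),'fro')            % expect 0 (up to roundoff)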

Page 43

The Tensor Outer Product Operation

More General...

A = B ◦ C ◦ D ,   B ∈ IRn1×r1 , C ∈ IRn2×r2 , D ∈ IRn3×r3

means

A(i1, i2, i3, i4, i5, i6) = B(i1, i2)C(i3, i4)D(i5, i6)

Note that

tenmat(A, [ 5 3 1 ], [ 6 4 2 ]) = B ⊗ C ⊗ D

Page 44

The Tensor Outer Product Operation

The HOSVD as a Sum of Rank-1 Tensors

If A = S ×1 U1 ×2 U2 · · · ×d Ud is the HOSVD of A ∈ IRn1×···×nd , then

A(i) = ∑_{j=1}^{n} S(j) U1(i1, j1) · · · Ud(id , jd)

implies

A = ∑_{j=1}^{n} S(j) · U1(:, j1) ◦ · · · ◦ Ud(:, jd)

Page 45

The Tensor Outer Product Operation

The KPSVD...

If

tenmat(A, [ 3 1 ], [ 4 2 ]) = ∑_i σi · Bi ⊗ Ci

then

A = ∑_i σi · Bi ◦ Ci

Page 46

Summary of Lecture 4.

Key Words

The Mode-k Matrix Product is a contraction between a tensor and a matrix that produces another tensor.

The Higher Order SVD of a tensor A assembles the SVDs of A's modal unfoldings.

The Nearest Kronecker Product problem involves computing the first singular triple of a permuted version of the given matrix.

The Kronecker Product SVD characterizes a block matrix as a sum of Kronecker products. By applying it to an unfolding of a tensor A, an outer product expansion for A is obtained.

The outer product operation "◦" is a way of combining an order-d1 tensor and an order-d2 tensor to obtain an order-(d1 + d2) tensor.
