Research Article

Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

Chang Liu,1,2 Tao Yan,1,2 WeiDong Zhao,1,2 YongHong Liu,1,2 Dan Li,1,2 Feng Lin,3 and JiLiu Zhou3

1 College of Information Science and Technology, Chengdu University, Chengdu 610106, China
2 Key Laboratory of Pattern Recognition and Intelligent Information Processing, Institutions of Higher Education of Sichuan Province, Chengdu 610106, China
3 School of Computer Science, Sichuan University, Chengdu 610065, China
Correspondence should be addressed to YongHong Liu; 284424241@qq.com
Received 5 July 2013; Revised 21 September 2013; Accepted 22 September 2013; Published 30 January 2014

Academic Editor: Praveen Agarwal

Copyright © 2014 C. Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA) based on the updated-SVD technique is proposed in this paper. This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedures for adding a single sample and multiple samples in detail. The experiments on handwritten digit recognition demonstrate that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA). At the same time, ITPCA also has lower time and space complexity.
1. Introduction
Pattern recognition and computer vision require processing a large amount of multidimensional data, such as image and video data. Until now, a large number of dimensionality reduction algorithms have been investigated. These algorithms project the whole data into a low-dimensional space and construct new features by analyzing the statistical relationships hidden in the data set. The new features often give good information or hints about the data's intrinsic structure. As a classical dimensionality reduction algorithm, principal component analysis has been widely applied in various applications.

Traditional dimensionality reduction algorithms generally transform each multidimensional datum into a vector by concatenating rows, which is called vectorization. Such a vectorization operation largely increases the computational cost of data analysis and seriously destroys the intrinsic tensor structure of high-order data. Consequently, tensor dimensionality reduction algorithms have been developed
based on tensor algebra [1–10]. Reference [10] summarized existing multilinear subspace learning algorithms for tensor data. Reference [11] generalized principal component analysis to tensor space and presented multilinear principal component analysis (MPCA). Reference [12] proposed the graph embedding framework to unify all dimensionality reduction algorithms.

Furthermore, traditional dimensionality reduction algorithms generally employ off-line learning to deal with newly added samples, which aggravates the computational cost. To address this problem, on-line learning algorithms have been proposed [13, 14]. In particular, reference [15] developed incremental principal component analysis (IPCA) based on the updated-SVD technique. But most on-line learning algorithms focus on vector-based methods; only a limited number of works study incremental learning in tensor space [16–18].
To improve incremental learning in tensor space, this paper presents incremental tensor principal component analysis (ITPCA) based on the updated-SVD technique, combining tensor representation with incremental learning.
Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2014, Article ID 819758, 10 pages, http://dx.doi.org/10.1155/2014/819758
This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedures for adding a single sample and multiple samples in detail. The experiments on handwritten digit recognition demonstrate that ITPCA achieves better performance than vector-based incremental principal component analysis (IPCA) and multilinear principal component analysis (MPCA). At the same time, ITPCA also has lower time and space complexity than MPCA.
2. Tensor Principal Component Analysis
In this section, we employ tensor representation to express high-dimensional image data. Consequently, a high-dimensional image dataset can be expressed as a tensor dataset $X = \{X_1, \ldots, X_M\}$, where $X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is an $N$-dimensional tensor and $M$ is the number of samples in the dataset. Based on this representation, the following definitions are introduced.
Definition 1. For tensor dataset $X$, the mean tensor is defined as follows:
$$\bar{X} = \frac{1}{M} \sum_{i=1}^{M} X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}. \tag{1}$$
Definition 2. The unfolding matrix of the mean tensor along the $n$th dimension is called the mode-$n$ mean matrix and is defined as follows:
$$\bar{X}^{(n)} = \frac{1}{M} \sum_{i=1}^{M} X_i^{(n)} \in \mathbb{R}^{I_n \times \prod_{i \neq n} I_i}. \tag{2}$$
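As an illustration, the mode-$n$ unfolding and the mode-$n$ mean matrix of Definition 2 can be sketched in a few lines of NumPy (the column ordering of the unfolded matrix is a convention; any fixed ordering works as long as it is used consistently):

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: mode n becomes the rows, all other modes are flattened."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

# Mode-n mean matrix of a toy dataset (Definition 2). Unfolding commutes with
# averaging, so unfolding the mean tensor equals averaging the unfoldings.
data = [np.arange(24.0).reshape(2, 3, 4), np.ones((2, 3, 4))]
mean_tensor = sum(data) / len(data)
mean_mode1 = sum(unfold(X, 1) for X in data) / len(data)  # I_2 x (I_1 * I_3) = 3 x 8
```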
Definition 3. For tensor dataset $X$, the total scatter tensor is defined as follows:
$$\Psi_X = \sum_{m=1}^{M} \left\| X_m - \bar{X} \right\|^2, \tag{3}$$
where $\|A\|$ is the norm of tensor $A$.
Definition 4. For tensor dataset $X$, the mode-$n$ total scatter matrix is defined as follows:
$$C^{(n)} = \sum_{i=1}^{M} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T, \tag{4}$$
where $\bar{X}^{(n)}$ is the mode-$n$ mean matrix and $X_i^{(n)}$ is the mode-$n$ unfolding matrix of tensor $X_i$.
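Definition 4 can be sketched directly from the unfoldings; a useful sanity check, which follows from $\operatorname{tr}(DD^T) = \|D\|_F^2$, is that the trace of the mode-$n$ total scatter matrix equals the total scatter $\Psi_X$ of Definition 3 for every mode $n$:

```python
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mode_n_scatter(data, n):
    """C^(n) of Definition 4: sum of outer products of centered mode-n unfoldings."""
    mean_n = sum(unfold(X, n) for X in data) / len(data)
    I_n = data[0].shape[n]
    C = np.zeros((I_n, I_n))
    for X in data:
        D = unfold(X, n) - mean_n
        C += D @ D.T
    return C

rng = np.random.default_rng(0)
data = [rng.standard_normal((4, 5, 3)) for _ in range(6)]
C0 = mode_n_scatter(data, 0)

# Total scatter of Definition 3, for comparison with trace(C^(n))
mean = sum(data) / len(data)
psi = sum(np.sum((X - mean) ** 2) for X in data)
```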
Tensor PCA is introduced in [11, 19]. The target is to compute $N$ orthogonal projective matrices $U^{(n)} \in \mathbb{R}^{I_n \times P_n}$, $n = 1, \ldots, N$, that maximize the total scatter of the projected low-dimensional features:
$$\left\{ U^{(n)} \right\}_{n=1}^{N} = \arg\max_{U^{(n)}} \Psi_Y = \arg\max_{U^{(n)}} \sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2, \tag{5}$$
where $Y_m = X_m \times_1 U^{(1)T} \times_2 U^{(2)T} \times \cdots \times_N U^{(N)T}$.
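The multilinear projection producing $Y_m$ in (5) is just a sequence of mode products; a minimal NumPy sketch (the helper name `mode_product` is ours) is:

```python
import numpy as np

def mode_product(X, U, n):
    """Mode-n product X x_n U^T: contract mode n of X with the columns of U."""
    return np.moveaxis(np.tensordot(U.T, X, axes=(1, n)), 0, n)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
# Orthonormal projective matrices U^(n) with P_n = 2 columns each
Us = [np.linalg.qr(rng.standard_normal((I_n, 2)))[0] for I_n in X.shape]
Y = X
for n, U in enumerate(Us):
    Y = mode_product(Y, U, n)  # Y = X x_1 U^(1)T x_2 U^(2)T x_3 U^(3)T
```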
Since it is difficult to solve for the $N$ orthogonal projective matrices simultaneously, an iterative procedure is employed to compute them approximately. Generally, assuming that the projective matrices $U^{(1)}, \ldots, U^{(n-1)}, U^{(n+1)}, \ldots, U^{(N)}$ are known, we can solve the following optimization problem to obtain $U^{(n)}$:
$$\arg\max_{U^{(n)}} \operatorname{tr}\left( U^{(n)T} \left( \sum_{m=1}^{M} C_m^{(n)} C_m^{(n)T} \right) U^{(n)} \right), \tag{6}$$
where $C_m = (X_m - \bar{X}) \times_1 U^{(1)T} \times_2 U^{(2)T} \cdots \times_{n-1} U^{(n-1)T} \times_{n+1} U^{(n+1)T} \cdots \times_N U^{(N)T}$ and $C_m^{(n)}$ is the mode-$n$ unfolding matrix of tensor $C_m$.
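The alternating optimization just described can be sketched as follows. This is a simplified rendering with a fixed iteration count; initialization and convergence checks are omitted, and the function name `mpca` is ours:

```python
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mpca(data, ranks, n_iter=3):
    """Alternating MPCA solver sketch: fix all but one projective matrix,
    then eigendecompose the partially projected mode-n scatter matrix."""
    mean = sum(data) / len(data)
    N = data[0].ndim
    U = [np.eye(data[0].shape[n])[:, :ranks[n]] for n in range(N)]
    for _ in range(n_iter):
        for n in range(N):
            C = np.zeros((data[0].shape[n],) * 2)
            for X in data:
                D = X - mean
                for k in range(N):          # project every mode except n
                    if k != n:
                        D = np.moveaxis(np.tensordot(U[k].T, D, axes=(1, k)), 0, k)
                Dn = unfold(D, n)
                C += Dn @ Dn.T
            w, V = np.linalg.eigh(C)        # eigenvectors of the mode-n scatter
            U[n] = V[:, np.argsort(w)[::-1][:ranks[n]]]
    return U

rng = np.random.default_rng(0)
data = [rng.standard_normal((4, 5, 3)) for _ in range(10)]
U = mpca(data, ranks=[2, 2, 2])
```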
According to the above analysis, it is easy to derive the following theorems.
Theorem 5 (see [11]). For tensor data of order $n = 1$, that is, for first-order tensors, the objective function of MPCA is equal to that of PCA.

Proof. For a first-order tensor, $X_m \in \mathbb{R}^{I \times 1}$ is a vector and there are no other modes to project, so $C_m = X_m - \bar{X}$ and (6) becomes
$$\arg\max_{U} \operatorname{tr}\left( U^T \left( \sum_{m=1}^{M} \left( X_m - \bar{X} \right) \left( X_m - \bar{X} \right)^T \right) U \right). \tag{7}$$
So MPCA for first-order tensors is equal to vector-based PCA.
Theorem 6 (see [11]). For tensor data of order $n = 2$, that is, for second-order tensors, the objective function of MPCA is equal to that of 2DPCA.

Proof. For a second-order tensor, $X_m \in \mathbb{R}^{I_1 \times I_2}$ is a matrix, and two projective matrices $U^{(1)}$ and $U^{(2)}$ must be solved; then (5) becomes
$$\sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2 = \sum_{m=1}^{M} \left\| U^{(1)T} \left( X_m - \bar{X} \right) U^{(2)} \right\|^2. \tag{8}$$
The above equation is exactly the objective function of B2DPCA (bidirectional 2DPCA) [20–22]. Letting $U^{(2)} = I$, the projective matrix $U^{(1)}$ is solved. In this case the objective function is
$$\sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2 = \sum_{m=1}^{M} \left\| U^{(1)T} \left( X_m - \bar{X} \right) I \right\|^2. \tag{9}$$
Then the above equation is simplified into the objective function of row 2DPCA [23, 24]. Similarly, letting $U^{(1)} = I$, the projective matrix $U^{(2)}$ is solved; the objective function is
$$\sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2 = \sum_{m=1}^{M} \left\| I^T \left( X_m - \bar{X} \right) U^{(2)} \right\|^2. \tag{10}$$
Then the above equation is simplified into the objective function of column 2DPCA [23, 24].
Although vector-based PCA and 2DPCA can be regarded as special cases of MPCA, MPCA and 2DPCA employ different techniques to solve for the projective matrices: 2DPCA carries out PCA on row data and column data, respectively, while MPCA employs an iterative solution to compute the $N$ projective matrices. Supposing that the projective matrices $U^{(1)}, \ldots, U^{(n-1)}, U^{(n+1)}, \ldots, U^{(N)}$ are known, $U^{(n)}$ is solved. Equation (6) can be expressed as follows:
$$\begin{aligned}
C^{(n)} &= \sum_{i=1}^{M} \left( \left( X_i^{(n)} - \bar{X}^{(n)} \right) \times_k U^{(k)T} \Big|_{k=1,\, k \neq n}^{N} \right) \left( \left( X_i^{(n)} - \bar{X}^{(n)} \right) \times_k U^{(k)T} \Big|_{k=1,\, k \neq n}^{N} \right)^T \\
&= \sum_{i=1}^{M} \left( \left( X_i^{(n)} - \bar{X}^{(n)} \right) U^{(-n)} \right) \left( \left( X_i^{(n)} - \bar{X}^{(n)} \right) U^{(-n)} \right)^T \\
&= \sum_{i=1}^{M} \left( X_i^{(n)} - \bar{X}^{(n)} \right) U^{(-n)} U^{(-n)T} \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T,
\end{aligned} \tag{11}$$
where $U^{(-n)} = U^{(N)} \otimes \cdots \otimes U^{(n+1)} \otimes U^{(n-1)} \otimes \cdots \otimes U^{(1)}$. Because
$$U^{(-n)} U^{(-n)T} = \left( U^{(N)} \otimes \cdots \otimes U^{(n+1)} \otimes U^{(n-1)} \otimes \cdots \otimes U^{(1)} \right) \left( U^{(N)} \otimes \cdots \otimes U^{(n+1)} \otimes U^{(n-1)} \otimes \cdots \otimes U^{(1)} \right)^T \tag{12}$$
and, based on the properties of the Kronecker product,
$$(A \otimes B)^T = A^T \otimes B^T, \qquad (A \otimes B)(C \otimes D) = AC \otimes BD, \tag{13}$$
we have
$$U^{(-n)} U^{(-n)T} = U^{(N)} U^{(N)T} \otimes \cdots \otimes U^{(n+1)} U^{(n+1)T} \otimes U^{(n-1)} U^{(n-1)T} \otimes \cdots \otimes U^{(1)} U^{(1)T}. \tag{14}$$
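The two Kronecker identities in (13) are easy to check numerically with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 2)), rng.standard_normal((4, 5))
C, D = rng.standard_normal((2, 3)), rng.standard_normal((5, 4))

# Transpose rule: (A x B)^T = A^T x B^T
lhs_t, rhs_t = np.kron(A, B).T, np.kron(A.T, B.T)
# Mixed-product rule: (A x B)(C x D) = AC x BD
lhs_m, rhs_m = np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)
```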
Since $U^{(i)} \in \mathbb{R}^{I_i \times I_i}$ is an orthogonal matrix, $U^{(i)} U^{(i)T} = I$ for $i = 1, \ldots, N$, $i \neq n$, and hence $U^{(-n)} U^{(-n)T} = I$. If the dimensions of the projective matrices do not change during the iterative procedure, then
$$C^{(n)} = \sum_{i=1}^{M} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T. \tag{15}$$
The above equation is equal to that of B2DPCA. Because MPCA updates the projective matrices during the iterative procedure, it achieves better performance than 2DPCA.
Theorem 7. MPCA can be unified into the graph embedding framework [12].

Proof. Based on basic tensor algebra, we have
$$\sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2 = \sum_{m=1}^{M} \left\| \mathrm{vec}(Y_m) - \mathrm{vec}(\bar{Y}) \right\|^2. \tag{16}$$
Letting $y_m = \mathrm{vec}(Y_m)$ and $\mu = \mathrm{vec}(\bar{Y})$, we can get the following:
$$\begin{aligned}
\sum_{i=1}^{M} \left\| y_i - \mu \right\|^2 &= \sum_{i=1}^{M} \left( y_i - \mu \right)^T \left( y_i - \mu \right) \\
&= \sum_{i=1}^{M} \left( y_i - \frac{1}{M} \sum_{j=1}^{M} y_j \right)^T \left( y_i - \frac{1}{M} \sum_{j=1}^{M} y_j \right) \\
&= \sum_{i=1}^{M} y_i^T y_i - \frac{1}{M} \sum_{i=1}^{M} y_i^T \left( \sum_{j=1}^{M} y_j \right) - \frac{1}{M} \sum_{i=1}^{M} \left( \sum_{j=1}^{M} y_j \right)^T y_i + \frac{1}{M^2} \sum_{i=1}^{M} \left( \sum_{j=1}^{M} y_j \right)^T \left( \sum_{j=1}^{M} y_j \right) \\
&= \sum_{i=1}^{M} y_i^T y_i - \frac{1}{M} \sum_{i=1}^{M} \sum_{j=1}^{M} y_i^T y_j \\
&= \sum_{i=1}^{M} \left( \sum_{j=1}^{M} W_{ij} \right) y_i^T y_i - \sum_{i,j=1}^{M} W_{ij}\, y_i^T y_j \\
&= \frac{1}{2} \sum_{i,j=1}^{M} W_{ij} \left( y_i^T y_i + y_j^T y_j - y_i^T y_j - y_j^T y_i \right) \\
&= \frac{1}{2} \sum_{i,j=1}^{M} W_{ij} \left( y_i - y_j \right)^T \left( y_i - y_j \right) = \frac{1}{2} \sum_{i,j=1}^{M} W_{ij} \left\| y_i - y_j \right\|_F^2,
\end{aligned} \tag{17}$$
where the similarity matrix $W \in \mathbb{R}^{M \times M}$ satisfies $W_{ij} = 1/M$ for all $i, j$. So (16) can be written as follows:
$$\sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2 = \frac{1}{2} \sum_{i,j=1}^{M} W_{ij} \left\| Y_i - Y_j \right\|^2 = \frac{1}{2} \sum_{i,j=1}^{M} W_{ij} \left\| X_i \times_n U^{(n)} \Big|_{n=1}^{N} - X_j \times_n U^{(n)} \Big|_{n=1}^{N} \right\|^2. \tag{18}$$
So the theorem is proved.
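The pairwise-distance identity behind (17) and (18), with $W_{ij} = 1/M$, can be verified numerically on random vectorized features:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 8, 5
Y = rng.standard_normal((M, d))   # rows play the role of y_i = vec(Y_i)
mu = Y.mean(axis=0)

# Left side of (17): total scatter around the mean
scatter = sum(np.sum((y - mu) ** 2) for y in Y)
# Right side of (17): half the W-weighted sum of pairwise squared distances
W = 1.0 / M                       # uniform similarity matrix entries
pairwise = 0.5 * W * sum(np.sum((Y[i] - Y[j]) ** 2)
                         for i in range(M) for j in range(M))
```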
3. Incremental Tensor Principal Component Analysis

3.1. Incremental Learning Based on Single Sample. Given initial training samples $X_{\mathrm{old}} = \{X_1, \ldots, X_K\}$, $X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, when a new sample $X_{\mathrm{new}} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is added, the training dataset becomes $X = \{X_{\mathrm{old}}, X_{\mathrm{new}}\}$.
The mean tensor of the initial samples is
$$\bar{X}_{\mathrm{old}} = \frac{1}{K} \sum_{i=1}^{K} X_i. \tag{19}$$
The covariance tensor of the initial samples is
$$C_{\mathrm{old}} = \sum_{i=1}^{K} \left\| X_i - \bar{X}_{\mathrm{old}} \right\|^2. \tag{20}$$
The mode-$n$ covariance matrix of the initial samples is
$$C_{\mathrm{old}}^{(n)} = \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right)^T. \tag{21}$$
When the new sample is added, the mean tensor is
$$\bar{X} = \frac{1}{K+1} \sum_{i=1}^{K+1} X_i = \frac{1}{K+1} \left( \sum_{i=1}^{K} X_i + X_{\mathrm{new}} \right) = \frac{1}{K+1} \left( K \bar{X}_{\mathrm{old}} + X_{\mathrm{new}} \right). \tag{22}$$
The mode-$n$ covariance matrix is expressed as follows:
$$C^{(n)} = \sum_{i=1}^{K+1} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T + \left( X_{\mathrm{new}}^{(n)} - \bar{X}^{(n)} \right) \left( X_{\mathrm{new}}^{(n)} - \bar{X}^{(n)} \right)^T, \tag{23}$$
where the first term of (23) is
$$\begin{aligned}
\sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T
&= \sum_{i=1}^{K} \left[ \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right) + \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right) \right] \left[ \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right)^T + \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right)^T \right] \\
&= \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right)^T + K \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right)^T \\
&\quad + \sum_{i=1}^{K} \left[ \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right)^T + \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right)^T \right] \\
&= C_{\mathrm{old}}^{(n)} + K \left( \bar{X}_{\mathrm{old}}^{(n)} - \frac{K \bar{X}_{\mathrm{old}}^{(n)} + X_{\mathrm{new}}^{(n)}}{K+1} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \frac{K \bar{X}_{\mathrm{old}}^{(n)} + X_{\mathrm{new}}^{(n)}}{K+1} \right)^T \\
&= C_{\mathrm{old}}^{(n)} + \frac{K}{(K+1)^2} \left( \bar{X}_{\mathrm{old}}^{(n)} - X_{\mathrm{new}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - X_{\mathrm{new}}^{(n)} \right)^T,
\end{aligned} \tag{24}$$
where the cross terms vanish because $\sum_{i=1}^{K} ( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} ) = 0$.
The second term of (23) is
$$\left( X_{\mathrm{new}}^{(n)} - \bar{X}^{(n)} \right) \left( X_{\mathrm{new}}^{(n)} - \bar{X}^{(n)} \right)^T = \left( X_{\mathrm{new}}^{(n)} - \frac{K \bar{X}_{\mathrm{old}}^{(n)} + X_{\mathrm{new}}^{(n)}}{K+1} \right) \left( X_{\mathrm{new}}^{(n)} - \frac{K \bar{X}_{\mathrm{old}}^{(n)} + X_{\mathrm{new}}^{(n)}}{K+1} \right)^T = \frac{K^2}{(K+1)^2} \left( \bar{X}_{\mathrm{old}}^{(n)} - X_{\mathrm{new}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - X_{\mathrm{new}}^{(n)} \right)^T. \tag{25}$$
Consequently, the mode-$n$ covariance matrix is updated as follows:
$$C^{(n)} = C_{\mathrm{old}}^{(n)} + \frac{K}{K+1} \left( \bar{X}_{\mathrm{old}}^{(n)} - X_{\mathrm{new}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - X_{\mathrm{new}}^{(n)} \right)^T. \tag{26}$$
Therefore, when a new sample is added, the projective matrices are obtained from the eigendecomposition of (26).
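The single-sample update (26) can be checked against a brute-force recomputation of the mode-$n$ covariance matrix:

```python
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def scatter(data, n):
    """Mode-n covariance matrix computed from scratch."""
    m = sum(unfold(X, n) for X in data) / len(data)
    return sum((unfold(X, n) - m) @ (unfold(X, n) - m).T for X in data)

rng = np.random.default_rng(0)
K, n = 6, 0
old = [rng.standard_normal((4, 5, 3)) for _ in range(K)]
new = rng.standard_normal((4, 5, 3))

mean_old = sum(unfold(X, n) for X in old) / K
d = mean_old - unfold(new, n)
C_updated = scatter(old, n) + (K / (K + 1)) * d @ d.T  # rank-one update (26)
C_direct = scatter(old + [new], n)                     # recomputed from scratch
```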
3.2. Incremental Learning Based on Multiple Samples. Given an initial training dataset $X_{\mathrm{old}} = \{X_1, \ldots, X_K\}$, $X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, when new samples $X_{\mathrm{new}} = \{X_{K+1}, \ldots, X_{K+T}\}$ are added, the training dataset becomes $X = \{X_1, \ldots, X_K, X_{K+1}, \ldots, X_{K+T}\}$. In this case, the mean tensor is updated as follows:
$$\bar{X} = \frac{1}{K+T} \sum_{i=1}^{K+T} X_i = \frac{1}{K+T} \left( \sum_{i=1}^{K} X_i + \sum_{i=K+1}^{K+T} X_i \right) = \frac{1}{K+T} \left( K \bar{X}_{\mathrm{old}} + T \bar{X}_{\mathrm{new}} \right). \tag{27}$$
Its mode-$n$ covariance matrix is
$$C^{(n)} = \sum_{i=1}^{K+T} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T + \sum_{i=K+1}^{K+T} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T. \tag{28}$$
The first term in (28) is written as follows:
$$\begin{aligned}
\sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T
&= \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right)^T + K \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right)^T \\
&\quad + \sum_{i=1}^{K} \left[ \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right)^T + \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right)^T \right],
\end{aligned} \tag{29}$$
where
$$\sum_{i=1}^{K} \left[ \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right)^T + \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right)^T \right] = 0,$$
since $\sum_{i=1}^{K} X_i^{(n)} = K \bar{X}_{\mathrm{old}}^{(n)}$, and, substituting (27),
$$K \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} \right)^T = \frac{K T^2}{(K+T)^2} \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right)^T. \tag{30}$$
Putting (30) into (29), (29) becomes
$$\sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = C_{\mathrm{old}}^{(n)} + \frac{K T^2}{(K+T)^2} \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right)^T. \tag{31}$$
The second term in (28) is written as follows:
$$\sum_{i=K+1}^{K+T} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = C_{\mathrm{new}}^{(n)} + T \left( \bar{X}_{\mathrm{new}}^{(n)} - \bar{X}^{(n)} \right) \left( \bar{X}_{\mathrm{new}}^{(n)} - \bar{X}^{(n)} \right)^T, \tag{32}$$
where
$$T \left( \bar{X}_{\mathrm{new}}^{(n)} - \bar{X}^{(n)} \right) \left( \bar{X}_{\mathrm{new}}^{(n)} - \bar{X}^{(n)} \right)^T = \frac{K^2 T}{(K+T)^2} \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right)^T. \tag{33}$$
Then (32) becomes
$$\sum_{i=K+1}^{K+T} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = C_{\mathrm{new}}^{(n)} + \frac{K^2 T}{(K+T)^2} \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right)^T. \tag{34}$$
Putting (31) and (34) into (28), we get
$$C^{(n)} = C_{\mathrm{old}}^{(n)} + C_{\mathrm{new}}^{(n)} + \frac{K T}{K+T} \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right)^T. \tag{35}$$
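Likewise, the multi-sample update (35) matches a brute-force recomputation of the mode-$n$ covariance matrix:

```python
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def scatter(data, n):
    """Mode-n covariance matrix computed from scratch."""
    m = sum(unfold(X, n) for X in data) / len(data)
    return sum((unfold(X, n) - m) @ (unfold(X, n) - m).T for X in data)

rng = np.random.default_rng(0)
K, T, n = 6, 4, 1
old = [rng.standard_normal((4, 5, 3)) for _ in range(K)]
new = [rng.standard_normal((4, 5, 3)) for _ in range(T)]

mean_old = sum(unfold(X, n) for X in old) / K
mean_new = sum(unfold(X, n) for X in new) / T
d = mean_old - mean_new
C_updated = scatter(old, n) + scatter(new, n) + (K * T / (K + T)) * d @ d.T  # (35)
C_direct = scatter(old + new, n)
```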
It is worth noting that when new samples become available, there is no need to recompute the mode-$n$ covariance matrix of all training samples. We only have to compute the mode-$n$ covariance matrix of the newly added samples and the difference between the original and new sample means. However, as in traditional incremental PCA, the eigendecomposition of $C^{(n)}$ must be repeated every time new samples are added. This repeated eigendecomposition of $C^{(n)}$ causes a heavy computational cost, which is called "the eigendecomposition updating problem". For traditional vector-based incremental learning, the updated-SVD technique was proposed in [25] to update the eigendecomposition. This paper introduces the updated-SVD technique into tensor-based incremental learning.
For the original samples, the mode-$n$ covariance matrix is
$$C_{\mathrm{old}}^{(n)} = \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)} \right)^T = S_{\mathrm{old}}^{(n)} S_{\mathrm{old}}^{(n)T}, \tag{36}$$
where $S_{\mathrm{old}}^{(n)} = [X_1^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}, \ldots, X_K^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}]$. According to the singular value decomposition $S_{\mathrm{old}}^{(n)} = U \Sigma V^T$, we get
$$S_{\mathrm{old}}^{(n)} S_{\mathrm{old}}^{(n)T} = \left( U \Sigma V^T \right) \left( U \Sigma V^T \right)^T = U \Sigma V^T V \Sigma U^T = U \Sigma^2 U^T = \mathrm{eig}\left( C_{\mathrm{old}}^{(n)} \right). \tag{37}$$
So the eigenvectors of $C_{\mathrm{old}}^{(n)}$ are the left singular vectors of $S_{\mathrm{old}}^{(n)}$, and the eigenvalues are the squares of the singular values of $S_{\mathrm{old}}^{(n)}$.

For the new samples, the mode-$n$ covariance matrix is
$$C_{\mathrm{new}}^{(n)} = \sum_{i=K+1}^{K+T} \left( X_i^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right)^T = S_{\mathrm{new}}^{(n)} S_{\mathrm{new}}^{(n)T}, \tag{38}$$
where $S_{\mathrm{new}}^{(n)} = [X_{K+1}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}, \ldots, X_{K+T}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}]$. According to (35), the updated mode-$n$ covariance matrix is
$$C^{(n)} = C_{\mathrm{old}}^{(n)} + C_{\mathrm{new}}^{(n)} + \frac{K T}{K+T} \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right) \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right)^T = S^{(n)} S^{(n)T}, \tag{39}$$
where $S^{(n)} = [S_{\mathrm{old}}^{(n)}, S_{\mathrm{new}}^{(n)}, \sqrt{K T / (K+T)}\, (\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)})]$. Therefore, the updated projective matrix $U^{(n)}$ consists of the left singular vectors corresponding to the largest $P_n$ singular values of $S^{(n)}$. The main steps of incremental tensor principal component analysis are listed as follows.
Input: original samples and newly added samples.
Output: $N$ projective matrices.

Step 1. Compute and save
$$\mathrm{eig}\left( C_{\mathrm{old}}^{(n)} \right) \approx \left[ U_r^{(n)}, \Sigma_r^{(n)} \right]. \tag{40}$$

Step 2. For $n = 1, \ldots, N$:
$$B = \left[ S_{\mathrm{new}}^{(n)}, \sqrt{\frac{K T}{K+T}} \left( \bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)} \right) \right]. \tag{41}$$
Process the QR decomposition for the following equation:
$$QR = \left( I - U_r^{(n)} U_r^{(n)T} \right) B. \tag{42}$$
Process the SVD for the following equation:
$$\mathrm{svd} \begin{bmatrix} \sqrt{\Sigma_r^{(n)}} & U_r^{(n)T} B \\ 0 & R \end{bmatrix} = \tilde{U} \tilde{\Sigma} \tilde{V}^T. \tag{43}$$
Compute the following equation:
$$\left[ S_{\mathrm{old}}^{(n)}, B \right] \approx \left( \left[ U_r^{(n)}, Q \right] \tilde{U} \right) \tilde{\Sigma} \left( \begin{bmatrix} V_r^{(n)} & 0 \\ 0 & I \end{bmatrix} \tilde{V} \right)^T. \tag{44}$$
Then the updated projective matrix is computed as follows:
$$U^{(n)} = \left[ U_r^{(n)}, Q \right] \tilde{U}. \tag{45}$$
end

Step 3. Repeat the above steps until the incremental learning is finished.
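Steps 1 and 2 can be sketched as follows. This is our own NumPy rendering of the updated-SVD of (41)-(45) (the function name and variable layout are ours): it extends the saved eigenpairs of $C_{\mathrm{old}}^{(n)}$ by the new columns $B$ without revisiting $S_{\mathrm{old}}^{(n)}$.

```python
import numpy as np

def updated_svd_basis(U_r, eigvals, B, rank):
    """Extend eigenpairs (U_r, eigvals) of C_old = S_old S_old^T by new
    columns B: QR of the residual of B (42), a small SVD (43), then a
    rotation of the enlarged basis (45). Returns the top-`rank` pairs."""
    proj = U_r.T @ B                       # U_r^T B
    Q, R = np.linalg.qr(B - U_r @ proj)    # QR = (I - U_r U_r^T) B, cf. (42)
    r, t = U_r.shape[1], B.shape[1]
    M = np.zeros((r + t, r + t))
    M[:r, :r] = np.diag(np.sqrt(eigvals))  # singular values of S_old, cf. (43)
    M[:r, r:] = proj
    M[r:, r:] = R
    Uh, sh, _ = np.linalg.svd(M)
    U_new = np.hstack([U_r, Q]) @ Uh       # cf. (45)
    return U_new[:, :rank], sh[:rank] ** 2

rng = np.random.default_rng(0)
S_old = rng.standard_normal((8, 5))
B = rng.standard_normal((8, 2))
U0, s0, _ = np.linalg.svd(S_old, full_matrices=False)  # saved eigenpairs (40)
U_new, e_new = updated_svd_basis(U0, s0 ** 2, B, 7)
```

With no truncation (all five old pairs kept), the result coincides with a fresh SVD of $[S_{\mathrm{old}}, B]$, which is exactly what makes the update exact when the kept rank covers the data.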
3.3. The Complexity Analysis. For a tensor dataset $X = \{X_1, \ldots, X_M\}$, $X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, without loss of generality it is assumed that all dimensions are equal, that is, $I_1 = \cdots = I_N = I$.

Vector-based PCA converts all data into vectors and constructs a data matrix $X \in \mathbb{R}^{M \times D}$, $D = I^N$. For vector-based PCA, the main computational cost contains three parts: the computation of the covariance matrix, the eigendecomposition of the covariance matrix, and the computation of the low-dimensional features. The time complexity of computing the covariance matrix is $O(M I^{2N})$, the time complexity of the eigendecomposition is $O(I^{3N})$, and the total time complexity, including the computation of the low-dimensional features, is $O(M I^{2N} + I^{3N})$.

Letting the number of iterations be 1, the time complexity of computing the mode-$n$ covariance matrices for MPCA is $O(M N I^{N+1})$, the time complexity of the eigendecompositions is $O(N I^3)$, and the time complexity of computing the low-dimensional features is $O(M N I^{N+1})$, so the total time complexity is $O(M N I^{N+1} + N I^3)$. Considering the time complexity, MPCA is superior to PCA.

For ITPCA, it is assumed that $T$ incremental datasets are added. MPCA has to recompute the mode-$n$ covariance matrices and conduct eigendecompositions for the initial dataset plus every incremental dataset; the more training samples there are, the higher the time complexity. If the updated-SVD technique is used, we only need to compute a QR decomposition and an SVD. The time complexity of the QR decomposition is $O(N I^{N+1})$. The time complexity of the rank-$k$ decomposition of the matrix with size $(r + I) \times (r + I^{N-1})$ is
$O(N (r + I) k^2)$. It can be seen that the time complexity of the updated-SVD has nothing to do with the number of newly added samples.

Figure 1: The samples in the USPS dataset.
Taking the space complexity into account, if the training samples are reduced into a low-dimensional space of dimension $D = \prod_{n=1}^{N} d_n$, then PCA needs $D \prod_{n=1}^{N} I_n$ bytes to save the projective matrices and MPCA needs $\sum_{n=1}^{N} I_n d_n$ bytes. So MPCA has lower space complexity than PCA. For incremental learning, both PCA and MPCA need $M \prod_{n=1}^{N} I_n$ bytes to save the initial training samples; ITPCA only needs $\sum_{n=1}^{N} I_n^2$ bytes to keep the mode-$n$ covariance matrices.
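To make the storage comparison concrete, here are the entry counts for a USPS-like setting; the reduced dimensions $d_1 = d_2 = 6$ and the sample count $M = 700$ are illustrative assumptions, not figures from the paper:

```python
# Illustrative storage counts (array entries, not bytes) for I1 = I2 = 16
# images; d1 = d2 = 6 and M = 700 are assumed values for the sake of example.
I, d, M, N = 16, 6, 700, 2

pca_projection = (I ** N) * (d ** N)  # D * prod(I_n): full vectorized projector
mpca_projection = N * I * d           # sum_n I_n * d_n: per-mode projectors
retrain_samples = M * I ** N          # what PCA/MPCA must keep for retraining
itpca_state = N * I ** 2              # sum_n I_n^2: mode-n covariance matrices

print(pca_projection, mpca_projection, retrain_samples, itpca_state)
# prints: 9216 192 179200 512
```

Even in this tiny second-order case, the state ITPCA must retain (512 entries) is orders of magnitude smaller than the stored samples (179200 entries) that batch retraining requires.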
4. Experiments

In this section, handwritten digit recognition experiments on the USPS image dataset are conducted to evaluate the performance of incremental tensor principal component analysis. The USPS handwritten digit dataset has 9298 images of the digits zero to nine, shown in Figure 1. Each image has size 16 × 16. In this paper, we choose 1000 images and divide them into initial training samples, newly added samples, and test samples. Furthermore, the nearest neighbor classifier is employed to classify the low-dimensional features. The recognition results are compared with PCA [26], IPCA [15], and MPCA [11].

At first, we choose 70 samples belonging to four classes as the initial training samples. At each stage of incremental learning, 70 samples belonging to two further classes are added. So after three stages, the training samples cover ten class labels with 70 samples in each class. The remaining samples are used as the testing dataset. All algorithms are implemented in MATLAB 2010 on an Intel(R) Core(TM) i5-3210M CPU @ 2.5 GHz with 4 GB RAM.
Firstly, 36 PCs are preserved and fed into the nearest neighbor classifier to obtain the recognition results, which are plotted in Figure 2. It can be seen that MPCA and ITPCA are better than PCA and IPCA for initial learning; the probable reason is that MPCA and ITPCA employ tensor representation to preserve the structure information.

The recognition results at the different learning stages are shown in Figures 3, 4, and 5. It can be seen that the recognition results of the four methods fluctuate violently
Figure 2: The recognition results for 36 PCs of the initial learning.
Figure 3: The recognition results of different methods at the first incremental learning.
when the number of low-dimensional features is small. However, as the number of features increases, the recognition performance becomes stable. Generally, MPCA and ITPCA are superior to PCA and IPCA. Although ITPCA and MPCA have comparable performance in the first two learning stages, ITPCA begins to surpass MPCA after the third stage. Figure 6 gives the best recognition percentages of the different methods, from which we can draw the same conclusion as from Figures 3, 4, and 5.
The time and space complexity of the different methods are shown in Figures 7 and 8, respectively. Taking the time complexity into account, it can be found that at the stage of initial learning PCA has the lowest time complexity. With
8 Mathematical Problems in Engineering
[Figure 4: The recognition results of different methods of the second incremental learning. Axes: the number of low-dimensional features (50–250) vs. the recognition results (0.945–0.975); curves: PCA, IPCA, MPCA, ITPCA.]
[Figure 5: The recognition results of different methods of the third incremental learning. Axes: the number of low-dimensional features (50–250) vs. the recognition results (0.91–0.96); curves: PCA, IPCA, MPCA, ITPCA.]
the increment of new samples, the time complexity of PCA and MPCA grows greatly, while the time complexity of IPCA and ITPCA stays stable, and ITPCA grows more slowly than MPCA. The reason is that ITPCA introduces incremental learning based on the updated-SVD technique and avoids decomposing the mode-n covariance matrix of the original samples again. Considering the space complexity, it is easy to find that ITPCA has the lowest space complexity among the four compared methods.
[Figure 6: The comparison of recognition performance of different methods. Axes: the number of class labels (Class 6, Class 8, Class 10) vs. the recognition results (0.92–0.99); bars: PCA, IPCA, MPCA, ITPCA.]
[Figure 7: The comparison of time complexity of different methods. Axes: the number of class labels (4–10) vs. the time complexity in seconds (0.1–0.8); curves: PCA, IPCA, MPCA, ITPCA.]
5 Conclusion
This paper presents incremental tensor principal component analysis based on the updated-SVD technique to take full advantage of the redundancy of the spatial structure information and of online learning. Furthermore, this paper proves that PCA and 2DPCA are special cases of MPCA and that all of them can be unified into the graph embedding framework. This
[Figure 8: The comparison of space complexity of different methods. Axes: the number of class labels (4–10) vs. the space complexity in MB (0–9); curves: PCA, IPCA, MPCA, ITPCA.]
paper also analyzes incremental learning based on single and multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that principal component analysis based on tensor representation is superior to principal component analysis based on vector representation. Although at the initial learning stage MPCA has better recognition performance than ITPCA, the learning capability of ITPCA gradually improves and eventually exceeds that of MPCA. Moreover, even as new samples are added, the time and space complexity of ITPCA grow only slowly.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work has been funded with support from the National Natural Science Foundation of China (61272448), the Doctoral Fund of the Ministry of Education of China (20110181130007), and the Young Scientist Project of Chengdu University (no. 2013XJZ21).
References
[1] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear discriminant analysis with regularization and aggregation for tensor object recognition," IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 103–123, 2009.
[2] C. Liu, K. He, J.-L. Zhou, and C.-B. Gao, "Discriminant orthogonal rank-one tensor projections for face recognition," in Intelligent Information and Database Systems, N. T. Nguyen, C.-G. Kim, and A. Janiak, Eds., vol. 6592 of Lecture Notes in Computer Science, pp. 203–211, 2011.
[3] G.-F. Lu, Z. Lin, and Z. Jin, "Face recognition using discriminant locality preserving projections based on maximum margin criterion," Pattern Recognition, vol. 43, no. 10, pp. 3572–3579, 2010.
[4] D. Tao, X. Li, X. Wu, and S. J. Maybank, "General tensor discriminant analysis and Gabor features for gait recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1700–1715, 2007.
[5] F. Nie, S. Xiang, Y. Song, and C. Zhang, "Extracting the optimal dimensionality for local tensor discriminant analysis," Pattern Recognition, vol. 42, no. 1, pp. 105–114, 2009.
[6] Z.-Z. Yu, C.-C. Jia, W. Pang, C.-Y. Zhang, and L.-H. Zhong, "Tensor discriminant analysis with multiscale features for action modeling and categorization," IEEE Signal Processing Letters, vol. 19, no. 2, pp. 95–98, 2012.
[7] S. J. Wang, J. Yang, M. F. Sun, X. J. Peng, M. M. Sun, and C. G. Zhou, "Sparse tensor discriminant color space for face verification," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 6, pp. 876–888, 2012.
[8] J. L. Minoi, C. E. Thomaz, and D. F. Gillies, "Tensor-based multivariate statistical discriminant methods for face applications," in Proceedings of the International Conference on Statistics in Science, Business and Engineering (ICSSBE '12), pp. 1–6, September 2012.
[9] N. Tang, X. Gao, and X. Li, "Tensor subclass discriminant analysis for radar target classification," Electronics Letters, vol. 48, no. 8, pp. 455–456, 2012.
[10] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "A survey of multilinear subspace learning for tensor data," Pattern Recognition, vol. 44, no. 7, pp. 1540–1551, 2011.
[11] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: multilinear principal component analysis of tensor objects," IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18–39, 2008.
[12] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, "Graph embedding and extensions: a general framework for dimensionality reduction," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 1, pp. 40–51, 2007.
[13] R. Plamondon and S. N. Srihari, "On-line and off-line handwriting recognition: a comprehensive survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 63–84, 2000.
[14] C. M. Johnson, "A survey of current research on online communities of practice," Internet and Higher Education, vol. 4, no. 1, pp. 45–60, 2001.
[15] P. Hall, D. Marshall, and R. Martin, "Merging and splitting eigenspace models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 9, pp. 1042–1049, 2000.
[16] J. Sun, D. Tao, S. Papadimitriou, P. S. Yu, and C. Faloutsos, "Incremental tensor analysis: theory and applications," ACM Transactions on Knowledge Discovery from Data, vol. 2, no. 3, article 11, 2008.
[17] J. Wen, X. Gao, Y. Yuan, D. Tao, and J. Li, "Incremental tensor biased discriminant analysis: a new color-based visual tracking method," Neurocomputing, vol. 73, no. 4–6, pp. 827–839, 2010.
[18] J.-G. Wang, E. Sung, and W.-Y. Yau, "Incremental two-dimensional linear discriminant analysis with applications to face recognition," Journal of Network and Computer Applications, vol. 33, no. 3, pp. 314–322, 2010.
[19] X. Qiao, R. Xu, Y.-W. Chen, T. Igarashi, K. Nakao, and A. Kashimoto, "Generalized N-dimensional principal component analysis (GND-PCA) based statistical appearance modeling of facial images with multiple modes," IPSJ Transactions on Computer Vision and Applications, vol. 1, pp. 231–241, 2009.
[20] H. Kong, X. Li, L. Wang, E. K. Teoh, J.-G. Wang, and R. Venkateswarlu, "Generalized 2D principal component analysis," in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '05), vol. 1, pp. 108–113, August 2005.
[21] D. Zhang and Z.-H. Zhou, "(2D)² PCA: two-directional two-dimensional PCA for efficient face representation and recognition," Neurocomputing, vol. 69, no. 1–3, pp. 224–231, 2005.
[22] J. Ye, "Generalized low rank approximations of matrices," Machine Learning, vol. 61, no. 1–3, pp. 167–191, 2005.
[23] J. Yang, D. Zhang, A. F. Frangi, and J.-Y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
[24] J. Yang and J.-Y. Yang, "From image vector to matrix: a straightforward image projection technique, IMPCA vs. PCA," Pattern Recognition, vol. 35, no. 9, pp. 1997–1999, 2002.
[25] J. Kwok and H. Zhao, "Incremental eigen decomposition," in Proceedings of the International Conference on Artificial Neural Networks (ICANN '03), pp. 270–273, Istanbul, Turkey, June 2003.
[26] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedure for adding single and multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that ITPCA achieves better performance than vector-based incremental principal component analysis (IPCA) and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity than MPCA.
2 Tensor Principal Component Analysis
In this section we will employ tensor representation to express high-dimensional image data. Consequently, a high-dimensional image dataset can be expressed as a tensor dataset $X = \{X_1, \ldots, X_M\}$, where $X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is an $N$-dimensional tensor and $M$ is the number of samples in the dataset. Based on this representation, the following definitions are introduced.
Definition 1. For tensor dataset $X$, the mean tensor is defined as follows:

$$\bar{X} = \frac{1}{M} \sum_{i=1}^{M} X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}. \quad (1)$$
Definition 2. The unfolding matrix of the mean tensor along the $n$th dimension is called the mode-$n$ mean matrix and is defined as follows:

$$\bar{X}^{(n)} = \frac{1}{M} \sum_{i=1}^{M} X_i^{(n)} \in \mathbb{R}^{I_n \times \prod_{i \neq n} I_i}. \quad (2)$$
Definition 3. For tensor dataset $X$, the total scatter tensor is defined as follows:

$$\Psi_X = \sum_{m=1}^{M} \left\| X_m - \bar{X} \right\|^2, \quad (3)$$

where $\|A\|$ is the norm of tensor $A$.
Definition 4. For tensor dataset $X$, the mode-$n$ total scatter matrix is defined as follows:

$$C^{(n)} = \sum_{i=1}^{M} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T, \quad (4)$$

where $\bar{X}^{(n)}$ is the mode-$n$ mean matrix and $X_i^{(n)}$ is the mode-$n$ unfolding matrix of tensor $X_i$.
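As a concrete illustration of Definitions 2–4, the mode-$n$ quantities can be computed directly from unfolded tensors. The following is a minimal NumPy sketch (not the authors' MATLAB implementation; the helper names `unfold` and `mode_n_scatter` are ours):

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: move axis n to the front and flatten the rest,
    giving an I_n x (product of the other dimensions) matrix."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mode_n_scatter(tensors, n):
    """Mode-n total scatter matrix C^(n) of Definition 4."""
    # Mode-n mean matrix of Definition 2.
    Xbar_n = np.mean([unfold(X, n) for X in tensors], axis=0)
    C = np.zeros((Xbar_n.shape[0], Xbar_n.shape[0]))
    for X in tensors:
        D = unfold(X, n) - Xbar_n
        C += D @ D.T
    return C

rng = np.random.default_rng(0)
tensors = [rng.standard_normal((4, 5, 6)) for _ in range(10)]
C1 = mode_n_scatter(tensors, 1)
print(C1.shape)  # (5, 5)
```

A useful sanity check is that $\operatorname{tr}(C^{(n)})$ equals the total scatter $\Psi_X$ of Definition 3 for every mode $n$, since unfolding preserves the Frobenius norm.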
Tensor PCA is introduced in [11, 19]. The target is to compute $N$ orthogonal projection matrices $U^{(n)} \in \mathbb{R}^{I_n \times P_n}$, $n = 1, \ldots, N$, that maximize the total scatter tensor of the projected low-dimensional features:

$$\left\{ U^{(n)}, n = 1, \ldots, N \right\} = \arg\max_{U^{(n)}} \Psi_Y = \arg\max_{U^{(n)}} \sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2, \quad (5)$$

where $Y_m = X_m \times_1 U^{(1)T} \times_2 U^{(2)T} \times \cdots \times_N U^{(N)T}$.
Since it is difficult to solve for the $N$ orthogonal projection matrices simultaneously, an iterative procedure is employed to compute them approximately. Generally, it is assumed that the projection matrices $U^{(1)}, \ldots, U^{(n-1)}, U^{(n+1)}, \ldots, U^{(N)}$ are known, and we solve the following optimization problem to obtain $U^{(n)}$:

$$\arg\max_{U^{(n)}} \sum_{m=1}^{M} \operatorname{tr}\left( U^{(n)T} C_m^{(n)} C_m^{(n)T} U^{(n)} \right), \quad (6)$$

where $C_m = (X_m - \bar{X}) \times_1 U^{(1)T} \times_2 U^{(2)T} \cdots \times_{n-1} U^{(n-1)T} \times_{n+1} U^{(n+1)T} \cdots \times_N U^{(N)T}$ and $C_m^{(n)}$ is the mode-$n$ unfolding matrix of tensor $C_m$.
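The iterative procedure can be sketched as follows. This is a minimal reading of the alternating scheme around (6), not the authors' code; the function name `mpca`, the identity-based initialization, and the fixed iteration count are our own choices:

```python
import numpy as np

def unfold(X, n):
    # Mode-n unfolding: I_n x (product of remaining dimensions).
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mpca(tensors, ranks, n_iter=5):
    """Alternately fix all projection matrices except U^(n) and take U^(n)
    as the top eigenvectors of the partially projected mode-n scatter."""
    Xbar = np.mean(tensors, axis=0)
    dims = Xbar.shape
    N = len(dims)
    # Initial guess: truncated identity matrices.
    U = [np.eye(dims[n])[:, :ranks[n]] for n in range(N)]
    for _ in range(n_iter):
        for n in range(N):
            C = np.zeros((dims[n], dims[n]))
            for X in tensors:
                D = X - Xbar
                # Project every mode except n: D x_k U^(k)T.
                for k in range(N):
                    if k != n:
                        D = np.moveaxis(np.tensordot(U[k].T, D, axes=(1, k)), 0, k)
                Dn = unfold(D, n)
                C += Dn @ Dn.T
            # Eigendecomposition of the scatter in Eq. (6); keep top P_n vectors.
            w, V = np.linalg.eigh(C)
            U[n] = V[:, np.argsort(w)[::-1][:ranks[n]]]
    return U

rng = np.random.default_rng(1)
data = [rng.standard_normal((6, 7, 8)) for _ in range(20)]
U = mpca(data, ranks=[2, 3, 3])
print([u.shape for u in U])  # [(6, 2), (7, 3), (8, 3)]
```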
According to the above analysis, it is easy to derive the following theorems.

Theorem 5 (see [11]). For tensor data of order $n = 1$, that is, for first-order tensors, the objective function of MPCA is equal to that of PCA.

Proof. For a first-order tensor, $X_m \in \mathbb{R}^{I \times 1}$ is a vector; then (6) becomes

$$\sum_{m=1}^{M} \operatorname{tr}\left( U^T \left( X_m - \bar{X} \right) \left( X_m - \bar{X} \right)^T U \right). \quad (7)$$

So MPCA for first-order tensors is equal to vector-based PCA.
Theorem 6 (see [11]). For tensor data of order $n = 2$, that is, for second-order tensors, the objective function of MPCA is equal to that of 2DPCA.

Proof. For a second-order tensor, $X_m \in \mathbb{R}^{I_1 \times I_2}$ is a matrix, and two projection matrices $U^{(1)}$ and $U^{(2)}$ must be solved; then (5) becomes

$$\sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2 = \sum_{m=1}^{M} \left\| U^{(1)T} \left( X_m - \bar{X} \right) U^{(2)} \right\|^2. \quad (8)$$

The above equation is exactly the objective function of B2DPCA (bidirectional 2DPCA) [20–22]. Letting $U^{(2)} = I$, the projection matrix $U^{(1)}$ is solved; in this case the objective function is

$$\sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2 = \sum_{m=1}^{M} \left\| U^{(1)T} \left( X_m - \bar{X} \right) I \right\|^2. \quad (9)$$

Then the above equation is simplified into the objective function of row 2DPCA [23, 24]. Similarly, letting $U^{(1)} = I$, the projection matrix $U^{(2)}$ is solved; the objective function is

$$\sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2 = \sum_{m=1}^{M} \left\| I^T \left( X_m - \bar{X} \right) U^{(2)} \right\|^2. \quad (10)$$

Then the above equation is simplified into the objective function of column 2DPCA [23, 24].
Although vector-based PCA and 2DPCA can be regarded as special cases of MPCA, MPCA and 2DPCA employ different techniques to solve for the projection matrices: 2DPCA carries out PCA on row data and column data separately, while MPCA employs an iterative solution to compute the $N$ projection matrices. Supposing that the projection matrices $U^{(1)}, \ldots, U^{(n-1)}, U^{(n+1)}, \ldots, U^{(N)}$ are known, $U^{(n)}$ is solved; the scatter in (6) can be expressed as follows:

$$C^{(n)} = \sum_{i=1}^{M} \left( \left( X_i^{(n)} - \bar{X}^{(n)} \right) U^{(-n)} \right) \left( \left( X_i^{(n)} - \bar{X}^{(n)} \right) U^{(-n)} \right)^T = \sum_{i=1}^{M} \left( X_i^{(n)} - \bar{X}^{(n)} \right) U^{(-n)} U^{(-n)T} \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T, \quad (11)$$

where $U^{(-n)} = U^{(N)} \otimes \cdots \otimes U^{(n+1)} \otimes U^{(n-1)} \otimes \cdots \otimes U^{(1)}$.
Because

$$U^{(-n)} U^{(-n)T} = \left( U^{(N)} \otimes \cdots \otimes U^{(n+1)} \otimes U^{(n-1)} \otimes \cdots \otimes U^{(1)} \right) \left( U^{(N)} \otimes \cdots \otimes U^{(n+1)} \otimes U^{(n-1)} \otimes \cdots \otimes U^{(1)} \right)^T, \quad (12)$$

and, based on the Kronecker product, we have

$$(A \otimes B)^T = A^T \otimes B^T, \qquad (A \otimes B)(C \otimes D) = AC \otimes BD, \quad (13)$$

so

$$U^{(-n)} U^{(-n)T} = U^{(N)} U^{(N)T} \otimes \cdots \otimes U^{(n+1)} U^{(n+1)T} \otimes U^{(n-1)} U^{(n-1)T} \otimes \cdots \otimes U^{(1)} U^{(1)T}. \quad (14)$$

Since each $U^{(i)} \in \mathbb{R}^{I_i \times I_i}$ is an orthogonal matrix, $U^{(i)} U^{(i)T} = I$ for $i = 1, \ldots, N$, $i \neq n$, and hence $U^{(-n)} U^{(-n)T} = I$. If the dimensions of the projection matrices do not change during the iterative procedure, then

$$C^{(n)} = \sum_{i=1}^{M} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T. \quad (15)$$
The above equation is equal to B2DPCA. Because MPCA updates the projection matrices during the iterative procedure, it achieves better performance than 2DPCA.
Theorem 7. MPCA can be unified into the graph embedding framework [12].

Proof. Based on the basic knowledge of tensor algebra, we can get the following:

$$\sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2 = \sum_{m=1}^{M} \left\| \operatorname{vec}(Y_m) - \operatorname{vec}(\bar{Y}) \right\|^2. \quad (16)$$
Letting $y_m = \operatorname{vec}(Y_m)$ and $\mu = \operatorname{vec}(\bar{Y}) = \frac{1}{M} \sum_{j=1}^{M} y_j$, we can get the following:

$$\begin{aligned}
\sum_{i=1}^{M} \left\| y_i - \mu \right\|^2
&= \sum_{i=1}^{M} (y_i - \mu)^T (y_i - \mu) \\
&= \sum_{i=1}^{M} \left( y_i - \frac{1}{M} \sum_{j=1}^{M} y_j \right)^T \left( y_i - \frac{1}{M} \sum_{j=1}^{M} y_j \right) \\
&= \sum_{i=1}^{M} y_i^T y_i - \frac{1}{M} \sum_{i=1}^{M} \sum_{j=1}^{M} y_i^T y_j \\
&= \sum_{i=1}^{M} \left( \sum_{j=1}^{M} W_{ij} \right) y_i^T y_i - \sum_{i,j=1}^{M} W_{ij} \, y_i^T y_j \\
&= \frac{1}{2} \sum_{i,j=1}^{M} W_{ij} \left( y_i^T y_i + y_j^T y_j - y_i^T y_j - y_j^T y_i \right) \\
&= \frac{1}{2} \sum_{i,j=1}^{M} W_{ij} \left\| y_i - y_j \right\|^2, \quad (17)
\end{aligned}$$
where the similarity matrix $W \in \mathbb{R}^{M \times M}$ satisfies $W_{ij} = 1/M$ for all $i, j$. So (16) can be written as follows:

$$\sum_{m=1}^{M} \left\| Y_m - \bar{Y} \right\|^2 = \frac{1}{2} \sum_{i,j=1}^{M} W_{ij} \left\| Y_i - Y_j \right\|^2 = \frac{1}{2} \sum_{i,j=1}^{M} W_{ij} \left\| X_i \times_n U^{(n)T} \big|_{n=1}^{N} - X_j \times_n U^{(n)T} \big|_{n=1}^{N} \right\|^2. \quad (18)$$
So the theorem is proved
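The key identity behind (17), namely that the total scatter equals $\frac{1}{2}\sum_{i,j} W_{ij} \|y_i - y_j\|^2$ with $W_{ij} = 1/M$, can be checked numerically. The sketch below uses random vectors in place of $\operatorname{vec}(Y_m)$ (all variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
M, d = 12, 5
Y = rng.standard_normal((M, d))          # rows play the role of vec(Y_m)
mu = Y.mean(axis=0)

scatter = np.sum((Y - mu) ** 2)          # left-hand side of Eq. (17)

W = np.full((M, M), 1.0 / M)             # similarity matrix W_ij = 1/M
pairwise = 0.5 * sum(W[i, j] * np.sum((Y[i] - Y[j]) ** 2)
                     for i in range(M) for j in range(M))

print(np.isclose(scatter, pairwise))  # True
```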
3 Incremental Tensor Principal Component Analysis
3.1 Incremental Learning Based on Single Sample. Given initial training samples $X_{\text{old}} = \{X_1, \ldots, X_K\}$, $X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, when a new sample $X_{\text{new}} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is added, the training dataset becomes $X = \{X_{\text{old}}, X_{\text{new}}\}$.

The mean tensor of the initial samples is

$$\bar{X}_{\text{old}} = \frac{1}{K} \sum_{i=1}^{K} X_i. \quad (19)$$
The covariance tensor of the initial samples is

$$C_{\text{old}} = \sum_{i=1}^{K} \left\| X_i - \bar{X}_{\text{old}} \right\|^2. \quad (20)$$

The mode-$n$ covariance matrix of the initial samples is

$$C^{(n)}_{\text{old}} = \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right)^T. \quad (21)$$
When the new sample is added, the mean tensor is

$$\bar{X} = \frac{1}{K+1} \sum_{i=1}^{K+1} X_i = \frac{1}{K+1} \left( \sum_{i=1}^{K} X_i + X_{\text{new}} \right) = \frac{1}{K+1} \left( K \bar{X}_{\text{old}} + X_{\text{new}} \right). \quad (22)$$
The mode-$n$ covariance matrix is expressed as follows:

$$C^{(n)} = \sum_{i=1}^{K+1} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T + \left( X_{\text{new}}^{(n)} - \bar{X}^{(n)} \right) \left( X_{\text{new}}^{(n)} - \bar{X}^{(n)} \right)^T, \quad (23)$$
where the first term of (23) is

$$\begin{aligned}
\sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T
&= \sum_{i=1}^{K} \left[ \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right) + \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right) \right] \left[ \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right) + \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right) \right]^T \\
&= \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right)^T + K \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right)^T \\
&\quad + \sum_{i=1}^{K} \left[ \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right)^T + \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right)^T \right] \\
&= C^{(n)}_{\text{old}} + K \left( \bar{X}_{\text{old}}^{(n)} - \frac{K \bar{X}_{\text{old}}^{(n)} + X_{\text{new}}^{(n)}}{K+1} \right) \left( \bar{X}_{\text{old}}^{(n)} - \frac{K \bar{X}_{\text{old}}^{(n)} + X_{\text{new}}^{(n)}}{K+1} \right)^T \\
&= C^{(n)}_{\text{old}} + \frac{K}{(K+1)^2} \left( \bar{X}_{\text{old}}^{(n)} - X_{\text{new}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - X_{\text{new}}^{(n)} \right)^T, \quad (24)
\end{aligned}$$

where the cross terms vanish because $\sum_{i=1}^{K} (X_i^{(n)} - \bar{X}_{\text{old}}^{(n)}) = 0$.
The second term of (23) is

$$\left( X_{\text{new}}^{(n)} - \bar{X}^{(n)} \right) \left( X_{\text{new}}^{(n)} - \bar{X}^{(n)} \right)^T = \left( X_{\text{new}}^{(n)} - \frac{K \bar{X}_{\text{old}}^{(n)} + X_{\text{new}}^{(n)}}{K+1} \right) \left( X_{\text{new}}^{(n)} - \frac{K \bar{X}_{\text{old}}^{(n)} + X_{\text{new}}^{(n)}}{K+1} \right)^T = \frac{K^2}{(K+1)^2} \left( \bar{X}_{\text{old}}^{(n)} - X_{\text{new}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - X_{\text{new}}^{(n)} \right)^T. \quad (25)$$
Consequently, the mode-$n$ covariance matrix is updated as follows:

$$C^{(n)} = C^{(n)}_{\text{old}} + \frac{K}{K+1} \left( \bar{X}_{\text{old}}^{(n)} - X_{\text{new}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - X_{\text{new}}^{(n)} \right)^T. \quad (26)$$

Therefore, when a new sample is added, the projection matrices are solved according to the eigendecomposition of (26).
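The single-sample update (26) is easy to verify numerically: the incrementally updated mode-$n$ covariance matrix should coincide with the one recomputed from scratch over all $K + 1$ samples. A NumPy sketch (helper names are ours):

```python
import numpy as np

def unfold(X, n):
    # Mode-n unfolding: I_n x (product of remaining dimensions).
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def scatter(samples, n):
    """Mode-n covariance (scatter) matrix about the samples' own mean."""
    mats = [unfold(X, n) for X in samples]
    mean = np.mean(mats, axis=0)
    return sum((D - mean) @ (D - mean).T for D in mats)

rng = np.random.default_rng(4)
K, n = 6, 0
X_old = [rng.standard_normal((3, 4, 5)) for _ in range(K)]
X_new = rng.standard_normal((3, 4, 5))

C_old = scatter(X_old, n)
d = np.mean([unfold(X, n) for X in X_old], axis=0) - unfold(X_new, n)
C_inc = C_old + (K / (K + 1)) * d @ d.T          # update rule of Eq. (26)
C_batch = scatter(X_old + [X_new], n)            # recomputed from scratch

print(np.allclose(C_inc, C_batch))  # True
```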
3.2 Incremental Learning Based on Multiple Samples. Given an initial training dataset $X_{\text{old}} = \{X_1, \ldots, X_K\}$, $X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, when new samples $X_{\text{new}} = \{X_{K+1}, \ldots, X_{K+T}\}$ are added, the training dataset becomes $X = \{X_1, \ldots, X_K, X_{K+1}, \ldots, X_{K+T}\}$. In this case the mean tensor is updated to

$$\bar{X} = \frac{1}{K+T} \sum_{i=1}^{K+T} X_i = \frac{1}{K+T} \left( \sum_{i=1}^{K} X_i + \sum_{i=K+1}^{K+T} X_i \right) = \frac{1}{K+T} \left( K \bar{X}_{\text{old}} + T \bar{X}_{\text{new}} \right). \quad (27)$$
Its mode-$n$ covariance matrix is

$$C^{(n)} = \sum_{i=1}^{K+T} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T + \sum_{i=K+1}^{K+T} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T. \quad (28)$$
The first term in (28) is written as follows:

$$\sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right)^T + K \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right)^T + \sum_{i=1}^{K} \left[ \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right)^T + \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right)^T \right], \quad (29)$$
where

$$\sum_{i=1}^{K} \left[ \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right)^T + \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right)^T \right] = 0,$$

because $\sum_{i=1}^{K} (X_i^{(n)} - \bar{X}_{\text{old}}^{(n)}) = 0$, and

$$K \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}^{(n)} \right)^T = \frac{K T^2}{(K+T)^2} \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}_{\text{new}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}_{\text{new}}^{(n)} \right)^T. \quad (30)$$
Putting (30) into (29), (29) becomes as follows:

$$\sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = C^{(n)}_{\text{old}} + \frac{K T^2}{(K+T)^2} \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}_{\text{new}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}_{\text{new}}^{(n)} \right)^T. \quad (31)$$
The second term in (28) is written as follows:

$$\sum_{i=K+1}^{K+T} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = C^{(n)}_{\text{new}} + T \left( \bar{X}_{\text{new}}^{(n)} - \bar{X}^{(n)} \right) \left( \bar{X}_{\text{new}}^{(n)} - \bar{X}^{(n)} \right)^T, \quad (32)$$
where

$$T \left( \bar{X}_{\text{new}}^{(n)} - \bar{X}^{(n)} \right) \left( \bar{X}_{\text{new}}^{(n)} - \bar{X}^{(n)} \right)^T = \frac{K^2 T}{(K+T)^2} \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}_{\text{new}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}_{\text{new}}^{(n)} \right)^T. \quad (33)$$
Then (32) becomes as follows:

$$\sum_{i=K+1}^{K+T} \left( X_i^{(n)} - \bar{X}^{(n)} \right) \left( X_i^{(n)} - \bar{X}^{(n)} \right)^T = C^{(n)}_{\text{new}} + \frac{K^2 T}{(K+T)^2} \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}_{\text{new}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}_{\text{new}}^{(n)} \right)^T. \quad (34)$$
Putting (31) and (34) into (28), we can get the following:

$$C^{(n)} = C^{(n)}_{\text{old}} + C^{(n)}_{\text{new}} + \frac{K T}{K+T} \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}_{\text{new}}^{(n)} \right) \left( \bar{X}_{\text{old}}^{(n)} - \bar{X}_{\text{new}}^{(n)} \right)^T. \quad (35)$$
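Likewise, the multiple-sample update (35) can be checked against a batch recomputation over all $K + T$ samples; a NumPy sketch with our own helper names:

```python
import numpy as np

def unfold(X, n):
    # Mode-n unfolding: I_n x (product of remaining dimensions).
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def scatter_and_mean(samples, n):
    """Mode-n covariance matrix and mode-n mean matrix of a sample set."""
    mats = [unfold(X, n) for X in samples]
    mean = np.mean(mats, axis=0)
    return sum((D - mean) @ (D - mean).T for D in mats), mean

rng = np.random.default_rng(5)
K, T, n = 7, 4, 1
X_old = [rng.standard_normal((3, 4, 5)) for _ in range(K)]
X_new = [rng.standard_normal((3, 4, 5)) for _ in range(T)]

C_old, m_old = scatter_and_mean(X_old, n)
C_new, m_new = scatter_and_mean(X_new, n)
d = m_old - m_new
C_inc = C_old + C_new + (K * T / (K + T)) * d @ d.T   # update rule of Eq. (35)
C_batch, _ = scatter_and_mean(X_old + X_new, n)       # recomputed from scratch

print(np.allclose(C_inc, C_batch))  # True
```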
It is worth noting that when new samples become available, there is no need to recompute the mode-$n$ covariance matrix of all training samples; we only have to compute the mode-$n$ covariance matrix of the newly added samples and the difference term between the original and new sample means. However, as in traditional incremental PCA, the eigendecomposition of $C^{(n)}$ would have to be repeated each time new samples are added. This repeated eigendecomposition of $C^{(n)}$ causes heavy computational cost, which is called "the eigendecomposition updating problem." For traditional vector-based incremental learning, the updated-SVD technique was proposed in [25] to handle the eigendecomposition. This paper introduces the updated-SVD technique into the tensor-based incremental learning algorithm.
For the original samples, the mode-$n$ covariance matrix is

$$C^{(n)}_{\text{old}} = \sum_{i=1}^{K} \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right) \left( X_i^{(n)} - \bar{X}_{\text{old}}^{(n)} \right)^T = S^{(n)}_{\text{old}} S^{(n)T}_{\text{old}}, \quad (36)$$

where $S^{(n)}_{\text{old}} = [X_1^{(n)} - \bar{X}_{\text{old}}^{(n)}, \ldots, X_K^{(n)} - \bar{X}_{\text{old}}^{(n)}]$. Let $S^{(n)}_{\text{old}} = U \Sigma V^T$ be its singular value decomposition; then we can get the following:
$$S^{(n)}_{\text{old}} S^{(n)T}_{\text{old}} = \left( U \Sigma V^T \right) \left( U \Sigma V^T \right)^T = U \Sigma V^T V \Sigma U^T = U \Sigma^2 U^T. \quad (37)$$

So it is easy to derive that the eigenvectors of $C^{(n)}_{\text{old}}$ are the left singular vectors of $S^{(n)}_{\text{old}}$ and that its eigenvalues are the squares of the singular values of $S^{(n)}_{\text{old}}$.

For the new samples, the mode-$n$ covariance matrix is
$$
C^{(n)}_{\mathrm{new}} = \sum_{i=K+1}^{K+T}\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{new}}\bigr)^{T} = S^{(n)}_{\mathrm{new}}S^{(n)T}_{\mathrm{new}},
\tag{38}
$$
where $S^{(n)}_{\mathrm{new}} = \bigl[X^{(n)}_{K+1}-\bar{X}^{(n)}_{\mathrm{new}}, \ldots, X^{(n)}_{K+T}-\bar{X}^{(n)}_{\mathrm{new}}\bigr]$. According to (35), the updated mode-$n$ covariance matrix can be factorized as follows:
$$
C^{(n)} = C^{(n)}_{\mathrm{old}} + C^{(n)}_{\mathrm{new}} + \frac{KT}{K+T}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)^{T} = S^{(n)}S^{(n)T},
\tag{39}
$$
where $S^{(n)} = \bigl[S^{(n)}_{\mathrm{old}}, S^{(n)}_{\mathrm{new}}, \sqrt{KT/(K+T)}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigr]$. Therefore the updated projective matrix $U^{(n)}$ consists of the left singular vectors of $S^{(n)}$ corresponding to its $P_n$ largest singular values. The main steps of incremental tensor principal component analysis are listed as follows.
Input: the original samples and the newly added samples.
Output: $N$ projective matrices.

Step 1. Compute and save
$$
\mathrm{eig}\bigl(C^{(n)}_{\mathrm{old}}\bigr) \approx \bigl[U^{(n)}_r, \Sigma^{(n)}_r\bigr].
\tag{40}
$$

Step 2. For $n = 1, \ldots, N$:
$$
B = \Bigl[S^{(n)}_{\mathrm{new}}, \sqrt{\tfrac{KT}{K+T}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\Bigr].
\tag{41}
$$
Process the QR decomposition for the following equation:
$$
QR = \bigl(I-U^{(n)}_rU^{(n)T}_r\bigr)B.
\tag{42}
$$
Process the SVD decomposition for the following equation:
$$
\mathrm{svd}\begin{bmatrix}\sqrt{\Sigma^{(n)}_r} & U^{(n)T}_rB\\ 0 & R\end{bmatrix} = \tilde{U}\tilde{\Sigma}\tilde{V}^{T}.
\tag{43}
$$
Compute the following equation:
$$
\bigl[S^{(n)}_{\mathrm{old}}, B\bigr] \approx \Bigl(\bigl[U^{(n)}_r, Q\bigr]\tilde{U}\Bigr)\tilde{\Sigma}\Bigl(\begin{bmatrix}V^{(n)}_r & 0\\ 0 & I\end{bmatrix}\tilde{V}\Bigr)^{T}.
\tag{44}
$$
Then the updated projective matrix is computed as follows:
$$
U^{(n)} = \bigl[U^{(n)}_r, Q\bigr]\tilde{U}.
\tag{45}
$$
end

Step 3. Repeat the above steps until the incremental learning is finished.
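The Step 1–Step 2 procedure can be sketched for a single mode as follows. This is a minimal illustration rather than the paper's implementation: the sizes are arbitrary, the kept rank `r` is set to the full rank of `S_old` so the update is exact, and Step 1 stores singular values directly (so the square root in (43) is not needed).

```python
import numpy as np

rng = np.random.default_rng(1)
p, K, T = 12, 4, 3                 # unfolding rows, old/new sample counts (illustrative)
r = K                              # kept rank (full rank here, so the update is exact)

S_old = rng.normal(size=(p, K))    # centered old unfoldings
S_new = rng.normal(size=(p, T))    # centered new unfoldings
d = rng.normal(size=(p, 1))        # stands in for sqrt(KT/(K+T))*(mean_old - mean_new)

# Step 1: save the rank-r factors of S_old (left singular vectors / singular values)
U_r, s_r, Vt_r = np.linalg.svd(S_old, full_matrices=False)

# Step 2: updated-SVD
B = np.hstack([S_new, d])                                  # Eq. (41)
Q, R = np.linalg.qr(B - U_r @ (U_r.T @ B))                 # Eq. (42), QR of (I - U_r U_r^T)B
F = np.block([[np.diag(s_r), U_r.T @ B],
              [np.zeros((Q.shape[1], r)), R]])             # matrix decomposed in Eq. (43)
U_t, s_t, _ = np.linalg.svd(F)
U_updated = np.hstack([U_r, Q]) @ U_t                      # Eq. (45)

# Check: singular values match a direct SVD of [S_old, B]
s_ref = np.linalg.svd(np.hstack([S_old, B]), compute_uv=False)
print(np.allclose(np.sort(s_t)[::-1], s_ref))  # True
```

Note that the cost of this update is governed by the small matrix `F`, not by the number of original samples, which is the point of the updated-SVD technique.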
3.3. The Complexity Analysis. For a tensor dataset $X = \{X_1, \ldots, X_M\}$, $X_i \in \mathbb{R}^{I_1\times\cdots\times I_N}$, without loss of generality it is assumed that all dimensions are equal, that is, $I_1 = \cdots = I_N = I$.

Vector-based PCA converts each sample into a vector and stacks them into a data matrix $X \in \mathbb{R}^{M\times D}$, $D = I^N$. For vector-based PCA the main computational cost contains three parts: the computation of the covariance matrix, the eigendecomposition of the covariance matrix, and the computation of the low-dimensional features. The time complexity of computing the covariance matrix is $O(MI^{2N})$, the time complexity of the eigendecomposition is $O(I^{3N})$, and the time complexity of computing the low-dimensional features is $O(MI^{2N}+I^{3N})$.

Letting the number of iterations be 1, the time complexity of computing the mode-$n$ covariance matrices for MPCA is $O(MNI^{N+1})$, the time complexity of the eigendecompositions is $O(NI^{3})$, and the time complexity of computing the low-dimensional features is $O(MNI^{N+1})$, so the total time complexity is $O(MNI^{N+1}+NI^{3})$. Considering the time complexity, MPCA is superior to PCA.

For ITPCA, it is assumed that $T$ incremental datasets are added. MPCA has to recompute the mode-$n$ covariance matrices and conduct eigendecompositions for the initial dataset and each incremental dataset; the more training samples there are, the higher the time complexity. If the updated-SVD is used, we only need to compute a QR decomposition and an SVD decomposition. The time complexity of the QR decomposition is $O(NI^{N+1})$. The time complexity of the rank-$k$ decomposition of the matrix of size $(r+I)\times(r+I^{N-1})$ is
Figure 1: The samples in the USPS dataset.
$O(N(r+I)k^{2})$. It can be seen that the time complexity of the updated-SVD has nothing to do with the number of newly added samples.
Taking the space complexity into account, if the training samples are reduced into a low-dimensional space of dimension $D = \prod_{n=1}^{N}d_n$, then PCA needs $D\prod_{n=1}^{N}I_n$ bytes to save the projective matrices and MPCA needs $\sum_{n=1}^{N}I_nd_n$ bytes. So MPCA has lower space complexity than PCA. For incremental learning, both PCA and MPCA need $M\prod_{n=1}^{N}I_n$ bytes to save the initial training samples; ITPCA only needs $\sum_{n=1}^{N}I_n^{2}$ bytes to keep the mode-$n$ covariance matrices.
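For concreteness, the storage terms above can be evaluated with illustrative values chosen to resemble the USPS experiments in the next section ($I_n = 16$, $N = 2$, $M = 700$, and $d_n = 6$, giving 36 features); these concrete numbers are assumptions for this sketch, not figures reported in the paper.

```python
N, I, d, M = 2, 16, 6, 700   # modes, mode size, reduced mode size, sample count (illustrative)

D = d ** N                   # total reduced dimension (6*6 = 36 features)
pca_proj  = D * I ** N       # PCA projection matrix entries: D * prod(I_n)
mpca_proj = N * I * d        # MPCA projection matrices: sum_n I_n * d_n
samples   = M * I ** N       # storing all raw samples (PCA/MPCA incremental learning)
itpca_cov = N * I ** 2       # ITPCA: mode-n covariance matrices only

print(pca_proj, mpca_proj, samples, itpca_cov)  # 9216 192 179200 512
```

Even in this small setting, keeping only the mode-$n$ covariance matrices is orders of magnitude cheaper than storing the raw training set.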
4. Experiments
In this section, handwritten digit recognition experiments on the USPS image dataset are conducted to evaluate the performance of incremental tensor principal component analysis. The USPS handwritten digit dataset has 9298 images of the digits zero to nine, shown in Figure 1. Each image has size 16 × 16. In this paper we choose 1000 images and divide them into initial training samples, newly added samples, and test samples. Furthermore, the nearest neighbor classifier is employed to classify the low-dimensional features. The recognition results are compared with PCA [26], IPCA [15], and MPCA [11].
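The nearest neighbor classification step can be sketched generically as below; the feature vectors would come from whichever dimensionality reduction method is being compared, and the tiny 2-D data here are made up purely for illustration.

```python
import numpy as np

def nn_classify(train_feats, train_labels, test_feats):
    # 1-NN: assign each test feature vector the label of its closest training vector
    d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(axis=2)
    return train_labels[d2.argmin(axis=1)]

# Tiny illustration with made-up 2-D features
train = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
test = np.array([[0.1, -0.1], [0.9, 1.2]])
print(nn_classify(train, labels, test))  # [0 1]
```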
At first we choose 70 samples belonging to four classes from the initial training samples. At each round of incremental learning, 70 samples belonging to two further classes are added, so after three rounds the training samples cover ten class labels with 70 samples in each class. The remaining original training samples are used as the testing dataset. All algorithms are implemented in MATLAB 2010 on an Intel(R) Core(TM) i5-3210M CPU, 2.5 GHz, with 4 GB RAM.
Firstly, 36 PCs are preserved and fed into the nearest neighbor classifier to obtain the recognition results, which are plotted in Figure 2. It can be seen that MPCA and ITPCA are better than PCA and IPCA for the initial learning; the probable reason is that MPCA and ITPCA employ tensor representation to preserve the structure information.
The recognition results under the different learning stages are shown in Figures 3, 4, and 5. It can be seen that the recognition results of these four methods fluctuate violently
Figure 2: The recognition results for 36 PCs of the initial learning.
Figure 3: The recognition results of different methods of the first incremental learning.
when the numbers of low-dimensional features are small. However, as the number of features increases, the recognition performance becomes stable. In general, MPCA and ITPCA are superior to PCA and IPCA. Although ITPCA only has comparable performance in the first two rounds of learning, it begins to surpass MPCA after the third round. Figure 6 gives the best recognition rates of the different methods, from which we can draw the same conclusion as from Figures 3, 4, and 5.
The time and space complexity of the different methods are shown in Figures 7 and 8, respectively. Considering the time complexity, it can be found that at the stage of initial learning PCA has the lowest time complexity. With
Figure 4: The recognition results of different methods of the second incremental learning.
Figure 5: The recognition results of different methods of the third incremental learning.
the addition of new samples, the time complexity of PCA and MPCA grows greatly, while that of IPCA and ITPCA stays stable; ITPCA also grows more slowly than MPCA. The reason is that ITPCA performs incremental learning based on the updated-SVD technique and avoids decomposing the mode-$n$ covariance matrices of the original samples again. Considering the space complexity, it is easy to find that ITPCA has the lowest space complexity among the four compared methods.
Figure 6: The comparison of recognition performance of different methods.
Figure 7: The comparison of time complexity of different methods.
5. Conclusion
This paper presents incremental tensor principal component analysis based on the updated-SVD technique to take full advantage of the redundancy of the spatial structure information and of online learning. Furthermore, this paper proves that PCA and 2DPCA are special cases of MPCA and that all of them can be unified into the graph embedding framework. This
Figure 8: The comparison of space complexity of different methods.
paper also analyzes incremental learning based on a single sample and on multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that principal component analysis based on tensor representation is superior to principal component analysis based on vector representation. Although at the stage of initial learning MPCA has better recognition performance than ITPCA, the learning capability of ITPCA gradually improves and exceeds that of MPCA. Moreover, even as new samples are added, the time and space complexity of ITPCA still grow slowly.
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This work has been funded with support from the National Natural Science Foundation of China (61272448), the Doctoral Fund of the Ministry of Education of China (20110181130007), and the Young Scientist Project of Chengdu University (no. 2013XJZ21).
References
[1] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear discriminant analysis with regularization and aggregation for tensor object recognition," IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 103–123, 2009.
[2] C. Liu, K. He, J.-L. Zhou, and C.-B. Gao, "Discriminant orthogonal rank-one tensor projections for face recognition," in Intelligent Information and Database Systems, N. T. Nguyen, C.-G. Kim, and A. Janiak, Eds., vol. 6592 of Lecture Notes in Computer Science, pp. 203–211, 2011.
[3] G.-F. Lu, Z. Lin, and Z. Jin, "Face recognition using discriminant locality preserving projections based on maximum margin criterion," Pattern Recognition, vol. 43, no. 10, pp. 3572–3579, 2010.
[4] D. Tao, X. Li, X. Wu, and S. J. Maybank, "General tensor discriminant analysis and Gabor features for gait recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1700–1715, 2007.
[5] F. Nie, S. Xiang, Y. Song, and C. Zhang, "Extracting the optimal dimensionality for local tensor discriminant analysis," Pattern Recognition, vol. 42, no. 1, pp. 105–114, 2009.
[6] Z.-Z. Yu, C.-C. Jia, W. Pang, C.-Y. Zhang, and L.-H. Zhong, "Tensor discriminant analysis with multiscale features for action modeling and categorization," IEEE Signal Processing Letters, vol. 19, no. 2, pp. 95–98, 2012.
[7] S. J. Wang, J. Yang, M. F. Sun, X. J. Peng, M. M. Sun, and C. G. Zhou, "Sparse tensor discriminant color space for face verification," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 6, pp. 876–888, 2012.
[8] J. L. Minoi, C. E. Thomaz, and D. F. Gillies, "Tensor-based multivariate statistical discriminant methods for face applications," in Proceedings of the International Conference on Statistics in Science, Business and Engineering (ICSSBE '12), pp. 1–6, September 2012.
[9] N. Tang, X. Gao, and X. Li, "Tensor subclass discriminant analysis for radar target classification," Electronics Letters, vol. 48, no. 8, pp. 455–456, 2012.
[10] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "A survey of multilinear subspace learning for tensor data," Pattern Recognition, vol. 44, no. 7, pp. 1540–1551, 2011.
[11] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: multilinear principal component analysis of tensor objects," IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18–39, 2008.
[12] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, "Graph embedding and extensions: a general framework for dimensionality reduction," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 1, pp. 40–51, 2007.
[13] R. Plamondon and S. N. Srihari, "On-line and off-line handwriting recognition: a comprehensive survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 63–84, 2000.
[14] C. M. Johnson, "A survey of current research on online communities of practice," Internet and Higher Education, vol. 4, no. 1, pp. 45–60, 2001.
[15] P. Hall, D. Marshall, and R. Martin, "Merging and splitting eigenspace models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 9, pp. 1042–1049, 2000.
[16] J. Sun, D. Tao, S. Papadimitriou, P. S. Yu, and C. Faloutsos, "Incremental tensor analysis: theory and applications," ACM Transactions on Knowledge Discovery from Data, vol. 2, no. 3, article 11, 2008.
[17] J. Wen, X. Gao, Y. Yuan, D. Tao, and J. Li, "Incremental tensor biased discriminant analysis: a new color-based visual tracking method," Neurocomputing, vol. 73, no. 4–6, pp. 827–839, 2010.
[18] J.-G. Wang, E. Sung, and W.-Y. Yau, "Incremental two-dimensional linear discriminant analysis with applications to face recognition," Journal of Network and Computer Applications, vol. 33, no. 3, pp. 314–322, 2010.
[19] X. Qiao, R. Xu, Y.-W. Chen, T. Igarashi, K. Nakao, and A. Kashimoto, "Generalized N-Dimensional Principal Component Analysis (GND-PCA) based statistical appearance modeling of facial images with multiple modes," IPSJ Transactions on Computer Vision and Applications, vol. 1, pp. 231–241, 2009.
[20] H. Kong, X. Li, L. Wang, E. K. Teoh, J.-G. Wang, and R. Venkateswarlu, "Generalized 2D principal component analysis," in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '05), vol. 1, pp. 108–113, August 2005.
[21] D. Zhang and Z.-H. Zhou, "(2D)² PCA: two-directional two-dimensional PCA for efficient face representation and recognition," Neurocomputing, vol. 69, no. 1–3, pp. 224–231, 2005.
[22] J. Ye, "Generalized low rank approximations of matrices," Machine Learning, vol. 61, no. 1–3, pp. 167–191, 2005.
[23] J. Yang, D. Zhang, A. F. Frangi, and J.-Y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
[24] J. Yang and J.-Y. Yang, "From image vector to matrix: a straightforward image projection technique, IMPCA vs. PCA," Pattern Recognition, vol. 35, no. 9, pp. 1997–1999, 2002.
[25] J. Kwok and H. Zhao, "Incremental eigen decomposition," in Proceedings of the International Conference on Artificial Neural Networks (ICANN '03), pp. 270–273, Istanbul, Turkey, June 2003.
[26] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
Although vector-based PCA and 2DPCA can be regarded as special cases of MPCA, MPCA and 2DPCA employ different techniques to solve for the projective matrices: 2DPCA carries out PCA on the row data and the column data, respectively, while MPCA employs an iterative solution to compute the $N$ projective matrices. Suppose that the projective matrices $U^{(1)}, \ldots, U^{(n-1)}, U^{(n+1)}, \ldots, U^{(N)}$ are known and $U^{(n)}$ is to be solved. Equation (6) can be expressed as follows:
$$
\begin{aligned}
C^{(n)} &= \sum_{i=1}^{M}\Bigl(\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\times_k U^{(k)T}\Big|_{k=1,\,k\neq n}^{N}\Bigr)\Bigl(\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\times_k U^{(k)T}\Big|_{k=1,\,k\neq n}^{N}\Bigr)^{T} \\
&= \sum_{i=1}^{M}\Bigl(\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)U^{(-n)}\Bigr)\Bigl(\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)U^{(-n)}\Bigr)^{T} \\
&= \sum_{i=1}^{M}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)U^{(-n)}U^{(-n)T}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T},
\end{aligned}
\tag{11}
$$
where $U^{(-n)} = U^{(N)}\otimes\cdots\otimes U^{(n+1)}\otimes U^{(n-1)}\otimes\cdots\otimes U^{(1)}$.
Because
$$
U^{(-n)}U^{(-n)T} = \bigl(U^{(N)}\otimes\cdots\otimes U^{(n+1)}\otimes U^{(n-1)}\otimes\cdots\otimes U^{(1)}\bigr)\bigl(U^{(N)}\otimes\cdots\otimes U^{(n+1)}\otimes U^{(n-1)}\otimes\cdots\otimes U^{(1)}\bigr)^{T},
\tag{12}
$$
and based on the properties of the Kronecker product,
$$
(A\otimes B)^{T} = A^{T}\otimes B^{T}, \qquad (A\otimes B)(C\otimes D) = AC\otimes BD,
\tag{13}
$$
we have
$$
U^{(-n)}U^{(-n)T} = U^{(N)}U^{(N)T}\otimes\cdots\otimes U^{(n+1)}U^{(n+1)T}\otimes U^{(n-1)}U^{(n-1)T}\otimes\cdots\otimes U^{(1)}U^{(1)T}.
\tag{14}
$$
Since each $U^{(i)} \in \mathbb{R}^{I_i\times I_i}$ is an orthogonal matrix, $U^{(i)}U^{(i)T} = I$ for $i = 1, \ldots, N$, $i \neq n$, and therefore $U^{(-n)}U^{(-n)T} = I$. If the dimensions of the projective matrices do not change during the iterative procedure, then
$$
C^{(n)} = \sum_{i=1}^{K}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}.
\tag{15}
$$
The above equation is equal to that of B2DPCA. Because MPCA updates the projective matrices during the iterative procedure, it achieves better performance than 2DPCA.
Theorem 7. MPCA can be unified into the graph embedding framework [12].
Proof. Based on the basic knowledge of tensor algebra, we get the following:
$$
\sum_{m=1}^{M}\bigl\|Y_m-\bar{Y}\bigr\|^{2} = \sum_{m=1}^{M}\bigl\|\mathrm{vec}(Y_m)-\mathrm{vec}(\bar{Y})\bigr\|^{2}.
\tag{16}
$$
Letting $y_m = \mathrm{vec}(Y_m)$ and $\mu = \mathrm{vec}(\bar{Y})$, we get the following:
$$
\begin{aligned}
\sum_{i=1}^{K}\|y_i-\mu\|^{2}
&= \sum_{i=1}^{K}(y_i-\mu)(y_i-\mu)^{T} \\
&= \sum_{i=1}^{K}\Bigl(y_i-\frac{1}{K}\sum_{j=1}^{K}y_j\Bigr)\Bigl(y_i-\frac{1}{K}\sum_{j=1}^{K}y_j\Bigr)^{T} \\
&= \sum_{i=1}^{K}\Bigl(y_iy_i^{T}-\frac{1}{K}y_i\Bigl(\sum_{j=1}^{K}y_j\Bigr)^{T}-\frac{1}{K}\Bigl(\sum_{j=1}^{K}y_j\Bigr)y_i^{T}+\frac{1}{K^{2}}\Bigl(\sum_{j=1}^{K}y_j\Bigr)\Bigl(\sum_{j=1}^{K}y_j\Bigr)^{T}\Bigr) \\
&= \sum_{i=1}^{K}y_iy_i^{T}-\frac{1}{K}\sum_{i=1}^{K}y_i\Bigl(\sum_{j=1}^{K}y_j\Bigr)^{T}-\frac{1}{K}\sum_{i=1}^{K}\sum_{j=1}^{K}y_jy_i^{T}+\frac{1}{K}\Bigl(\sum_{j=1}^{K}y_j\Bigr)\Bigl(\sum_{j=1}^{K}y_j\Bigr)^{T} \\
&= \sum_{i=1}^{K}y_iy_i^{T}-\frac{1}{K}\sum_{i=1}^{K}\sum_{j=1}^{K}y_iy_j^{T} \\
&= \sum_{i=1}^{K}\Bigl(\sum_{j=1}^{K}W_{ij}\Bigr)y_iy_i^{T}-\sum_{i,j=1}^{K}W_{ij}\,y_iy_j^{T} \\
&= \frac{1}{2}\sum_{i,j=1}^{K}W_{ij}\bigl(y_iy_i^{T}+y_jy_j^{T}-y_iy_j^{T}-y_jy_i^{T}\bigr) \\
&= \frac{1}{2}\sum_{i,j=1}^{K}W_{ij}(y_i-y_j)(y_i-y_j)^{T}
= \frac{1}{2}\sum_{i,j=1}^{K}W_{ij}\bigl\|y_i-y_j\bigr\|_{F}^{2},
\end{aligned}
\tag{17}
$$
where the similarity matrix $W \in \mathbb{R}^{M\times M}$ has $W_{ij} = 1/K$ for all $i, j$. So (16) can be written as follows:
$$
\sum_{m=1}^{M}\bigl\|Y_m-\bar{Y}\bigr\|^{2}
= \frac{1}{2}\sum_{i,j=1}^{M}W_{ij}\bigl\|Y_i-Y_j\bigr\|^{2}
= \frac{1}{2}\sum_{i,j=1}^{M}W_{ij}\Bigl\|X_i\times_n U^{(n)}\big|_{n=1}^{N}-X_j\times_n U^{(n)}\big|_{n=1}^{N}\Bigr\|^{2}.
\tag{18}
$$
So the theorem is proved.
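With $W_{ij} = 1/K$, the identity just proved says that the total scatter equals half the $W$-weighted sum of pairwise squared distances. A quick numerical check on random vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
K, dim = 7, 5
y = rng.normal(size=(K, dim))
mu = y.mean(axis=0)

lhs = ((y - mu) ** 2).sum()                  # sum_i ||y_i - mu||^2
diff = y[:, None, :] - y[None, :, :]         # all pairwise differences y_i - y_j
rhs = 0.5 * (1.0 / K) * (diff ** 2).sum()    # (1/2) sum_ij W_ij ||y_i - y_j||^2

print(np.allclose(lhs, rhs))  # True
```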
3. Incremental Tensor Principal Component Analysis
3.1. Incremental Learning Based on a Single Sample. Given initial training samples $X_{\mathrm{old}} = \{X_1, \ldots, X_K\}$, $X_i \in \mathbb{R}^{I_1\times\cdots\times I_N}$, when a new sample $X_{\mathrm{new}} \in \mathbb{R}^{I_1\times\cdots\times I_N}$ is added, the training dataset becomes $X = \{X_{\mathrm{old}}, X_{\mathrm{new}}\}$.

The mean tensor of the initial samples is
$$
\bar{X}_{\mathrm{old}} = \frac{1}{K}\sum_{i=1}^{K}X_i.
\tag{19}
$$
The covariance tensor of the initial samples is
$$
C_{\mathrm{old}} = \sum_{i=1}^{K}\bigl\|X_i-\bar{X}_{\mathrm{old}}\bigr\|^{2}.
\tag{20}
$$
The mode-$n$ covariance matrix of the initial samples is
$$
C^{(n)}_{\mathrm{old}} = \sum_{i=1}^{K}\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}.
\tag{21}
$$
When the new sample is added, the mean tensor is
$$
\bar{X} = \frac{1}{K+1}\sum_{i=1}^{K+1}X_i = \frac{1}{K+1}\Bigl(\sum_{i=1}^{K}X_i+X_{\mathrm{new}}\Bigr) = \frac{1}{K+1}\bigl(K\bar{X}_{\mathrm{old}}+X_{\mathrm{new}}\bigr).
\tag{22}
$$
The mode-$n$ covariance matrix is expressed as follows:
$$
C^{(n)} = \sum_{i=1}^{K+1}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}
= \sum_{i=1}^{K}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}
+ \bigl(X^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)^{T},
\tag{23}
$$
where the first item of (23) is
$$
\begin{aligned}
\sum_{i=1}^{K}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}
&= \sum_{i=1}^{K}\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}}+\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}}+\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T} \\
&= \sum_{i=1}^{K}\bigl[(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}})+(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)})\bigr]\bigl[(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}})^{T}+(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)})^{T}\bigr] \\
&= \sum_{i=1}^{K}(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}})(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}})^{T}
+ K(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)})(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)})^{T} \\
&\quad+ \sum_{i=1}^{K}\bigl[(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}})(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)})^{T}
+ (\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)})(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}})^{T}\bigr] \\
&= \sum_{i=1}^{K}(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}})(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}})^{T}
+ K(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)})(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)})^{T} \\
&= C^{(n)}_{\mathrm{old}} + K\Bigl(\bar{X}^{(n)}_{\mathrm{old}}-\frac{K\bar{X}^{(n)}_{\mathrm{old}}+X^{(n)}_{\mathrm{new}}}{K+1}\Bigr)\Bigl(\bar{X}^{(n)}_{\mathrm{old}}-\frac{K\bar{X}^{(n)}_{\mathrm{old}}+X^{(n)}_{\mathrm{new}}}{K+1}\Bigr)^{T} \\
&= C^{(n)}_{\mathrm{old}} + \frac{K}{(K+1)^{2}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)^{T}.
\end{aligned}
\tag{24}
$$
The second item of (23) is
$$
\begin{aligned}
\bigl(X^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)^{T}
&= \Bigl(X^{(n)}_{\mathrm{new}}-\frac{K\bar{X}^{(n)}_{\mathrm{old}}+X^{(n)}_{\mathrm{new}}}{K+1}\Bigr)\Bigl(X^{(n)}_{\mathrm{new}}-\frac{K\bar{X}^{(n)}_{\mathrm{old}}+X^{(n)}_{\mathrm{new}}}{K+1}\Bigr)^{T} \\
&= \frac{K^{2}}{(K+1)^{2}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)^{T}.
\end{aligned}
\tag{25}
$$
Consequently, the mode-$n$ covariance matrix is updated as follows:
$$
C^{(n)} = C^{(n)}_{\mathrm{old}} + \frac{K}{K+1}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)^{T}.
\tag{26}
$$
Therefore, when a new sample is added, the projective matrices are obtained from the eigendecomposition of (26).
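The single-sample update (26) can be verified numerically; the unfoldings below are random matrices with illustrative sizes.

```python
import numpy as np

rng = np.random.default_rng(4)
K, rows, cols = 9, 4, 6
X_old = rng.normal(size=(K, rows, cols))   # mode-n unfoldings of the old samples
X_new = rng.normal(size=(rows, cols))      # unfolding of the single new sample

m_old = X_old.mean(axis=0)
m_all = (K * m_old + X_new) / (K + 1)      # Eq. (22)

C_old = sum((Xi - m_old) @ (Xi - m_old).T for Xi in X_old)

# Direct recomputation over all K + 1 samples
X_all = np.concatenate([X_old, X_new[None]])
C_direct = sum((Xi - m_all) @ (Xi - m_all).T for Xi in X_all)

# Incremental update, Eq. (26)
d = m_old - X_new
C_incr = C_old + K / (K + 1) * d @ d.T

print(np.allclose(C_direct, C_incr))  # True
```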
3.2. Incremental Learning Based on Multiple Samples. Given an initial training dataset $X_{\mathrm{old}} = \{X_1, \ldots, X_K\}$, $X_i \in \mathbb{R}^{I_1\times\cdots\times I_N}$, when new samples $X_{\mathrm{new}} = \{X_{K+1}, \ldots, X_{K+T}\}$ are added, the training dataset becomes $X = \{X_1, \ldots, X_K, X_{K+1}, \ldots, X_{K+T}\}$. In this case, the mean tensor is updated as follows:
$$
\bar{X} = \frac{1}{K+T}\sum_{i=1}^{K+T}X_i
= \frac{1}{K+T}\Bigl(\sum_{i=1}^{K}X_i+\sum_{i=K+1}^{K+T}X_i\Bigr)
= \frac{1}{K+T}\bigl(K\bar{X}_{\mathrm{old}}+T\bar{X}_{\mathrm{new}}\bigr).
\tag{27}
$$
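The mean combination in (27) is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(5)
K, T, shape = 6, 4, (3, 5)
X_old = rng.normal(size=(K, *shape))
X_new = rng.normal(size=(T, *shape))

# Mean over the pooled dataset versus the weighted combination of the two means
direct = np.concatenate([X_old, X_new]).mean(axis=0)
combined = (K * X_old.mean(axis=0) + T * X_new.mean(axis=0)) / (K + T)   # Eq. (27)

print(np.allclose(direct, combined))  # True
```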
Its mode-$n$ covariance matrix is
$$
C^{(n)} = \sum_{i=1}^{K+T}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}
= \sum_{i=1}^{K}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}
+ \sum_{i=K+1}^{K+T}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}.
\tag{28}
$$
The first item in (28) is written as follows:
$$
\begin{aligned}
\sum_{i=1}^{K}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}
&= \sum_{i=1}^{K}\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}
+ K\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T} \\
&\quad+ \sum_{i=1}^{K}\bigl[\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}
+ \bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}\bigr],
\end{aligned}
\tag{29}
$$
where the cross terms vanish because $\sum_{i=1}^{K}X^{(n)}_i = K\bar{X}^{(n)}_{\mathrm{old}}$:
$$
\sum_{i=1}^{K}\bigl[\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}
+ \bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}\bigr]
= \Bigl(\sum_{i=1}^{K}X^{(n)}_i-K\bar{X}^{(n)}_{\mathrm{old}}\Bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}
+ \bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\Bigl(\sum_{i=1}^{K}X^{(n)}_i-K\bar{X}^{(n)}_{\mathrm{old}}\Bigr)^{T}
= 0,
$$
and, using (27),
$$
K\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}
= \frac{KT^{2}}{(K+T)^{2}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)^{T}.
\tag{30}
$$
Putting (30) into (29), then (29) becomes as follows:
$$
\sum_{i=1}^{K}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}
= C^{(n)}_{\mathrm{old}} + \frac{KT^{2}}{(K+T)^{2}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)^{T}.
\tag{31}
$$
The second item in (28) is written as follows:
$$
\sum_{i=K+1}^{K+T}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}
= C^{(n)}_{\mathrm{new}} + T\bigl(\bar{X}^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)^{T},
\tag{32}
$$
where
$$
T\bigl(\bar{X}^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)^{T}
= \frac{K^{2}T}{(K+T)^{2}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)^{T}.
\tag{33}
$$
Then (32) becomes as follows:
$$
\sum_{i=K+1}^{K+T}\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_i-\bar{X}^{(n)}\bigr)^{T}
= C^{(n)}_{\mathrm{new}} + \frac{K^{2}T}{(K+T)^{2}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)^{T}.
\tag{34}
$$
Putting (31) and (34) into (28) then we can get the following
119862(119899)
= 119862(119899)
old + 119862(119899)
new +119870119879
119870 + 119879(119883(119899)
old minus 119883(119899)
new) (119883(119899)
old minus 119883(119899)
new)
119879
(35)
6 Mathematical Problems in Engineering
It is worthy to note that when new samples are available ithas no need to recompute themode-119899 covariancematrix of alltraining samplesWe just have to solve themode-119899 covariancematrix of new added samples and the difference betweenoriginal training samples and new added samples Howeverlike traditional incremental PCA eigen decomposition on119862(119899) has been repeated once new samples are added It is
certain that the repeated eigen decomposition on 119862(119899) will
cause heavy computational cost which is called ldquothe eigendecomposition updating problemrdquo For traditional vector-based incremental learning algorithm the updated-SVDtechnique is proposed in [25] to fit the eigen decompositionThis paper will introduce the updated-SVD technique intotensor-based incremental learning algorithm
For the original samples, the mode-$n$ covariance matrix is
$$C^{(n)}_{\mathrm{old}}=\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}=S^{(n)}_{\mathrm{old}}S^{(n)T}_{\mathrm{old}},\tag{36}$$
where $S^{(n)}_{\mathrm{old}}=[X^{(n)}_{1}-\bar{X}^{(n)}_{\mathrm{old}},\ldots,X^{(n)}_{K}-\bar{X}^{(n)}_{\mathrm{old}}]$. From the singular value decomposition $S^{(n)}_{\mathrm{old}}=U\Sigma V^{T}$ we get
$$S^{(n)}_{\mathrm{old}}S^{(n)T}_{\mathrm{old}}=\bigl(U\Sigma V^{T}\bigr)\bigl(U\Sigma V^{T}\bigr)^{T}=U\Sigma V^{T}V\Sigma U^{T}=U\Sigma^{2}U^{T}=\operatorname{eig}\bigl(C^{(n)}_{\mathrm{old}}\bigr).\tag{37}$$
So the eigenvectors of $C^{(n)}_{\mathrm{old}}$ are the left singular vectors of $S^{(n)}_{\mathrm{old}}$, and its eigenvalues are the squares of the singular values of $S^{(n)}_{\mathrm{old}}$.

For the new samples, the mode-$n$ covariance matrix is
$$C^{(n)}_{\mathrm{new}}=\sum_{i=K+1}^{K+T}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)^{T}=S^{(n)}_{\mathrm{new}}S^{(n)T}_{\mathrm{new}},\tag{38}$$
where $S^{(n)}_{\mathrm{new}}=[X^{(n)}_{K+1}-\bar{X}^{(n)}_{\mathrm{new}},\ldots,X^{(n)}_{K+T}-\bar{X}^{(n)}_{\mathrm{new}}]$. According to (35), the updated mode-$n$ covariance matrix is
$$C^{(n)}=C^{(n)}_{\mathrm{old}}+C^{(n)}_{\mathrm{new}}+\frac{KT}{K+T}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)^{T}=S^{(n)}S^{(n)T},\tag{39}$$
where $S^{(n)}=\bigl[S^{(n)}_{\mathrm{old}},\;S^{(n)}_{\mathrm{new}},\;\sqrt{KT/(K+T)}\,\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigr]$. Therefore the updated projective matrix $U^{(n)}$ consists of the eigenvectors corresponding to the largest $P_{n}$ eigenvalues of $S^{(n)}S^{(n)T}$, that is, the dominant left singular vectors of $S^{(n)}$. The main steps of incremental tensor principal component analysis are listed as follows.
Input: the original samples and the newly added samples.
Output: $N$ projective matrices.

Step 1. Compute and save
$$\operatorname{eig}\bigl(C^{(n)}_{\mathrm{old}}\bigr)\approx\bigl[U^{(n)}_{r},\Sigma^{(n)}_{r}\bigr].\tag{40}$$

Step 2. For $n=1,\ldots,N$:
$$B=\Bigl[S^{(n)}_{\mathrm{new}},\;\sqrt{\frac{KT}{K+T}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\Bigr].\tag{41}$$
Perform the QR decomposition
$$QR=\bigl(I-U^{(n)}_{r}U^{(n)T}_{r}\bigr)B.\tag{42}$$
Perform the SVD
$$\operatorname{svd}\begin{bmatrix}\sqrt{\Sigma^{(n)}_{r}} & U^{(n)T}_{r}B\\ 0 & R\end{bmatrix}=\tilde{U}\tilde{\Sigma}\tilde{V}^{T}.\tag{43}$$
This yields
$$\bigl[S^{(n)}_{\mathrm{old}},B\bigr]\approx\Bigl(\bigl[U^{(n)}_{r},Q\bigr]\tilde{U}\Bigr)\,\tilde{\Sigma}\,\Bigl(\begin{bmatrix}V^{(n)}_{r}&0\\0&I\end{bmatrix}\tilde{V}\Bigr)^{T}.\tag{44}$$
Then the updated projective matrix is computed as
$$U^{(n)}=\bigl[U^{(n)}_{r},Q\bigr]\tilde{U}.\tag{45}$$
End.

Step 3. Repeat the above steps until the incremental learning is finished.
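The loop body of Step 2 is essentially the standard updated-SVD step. The sketch below is illustrative rather than the paper's implementation: variable names and the synthetic rank-$r$ data are assumptions, and `Sig_r` here already stores singular values (rather than eigenvalues of $C^{(n)}_{\mathrm{old}}$), so no square root appears.

```python
import numpy as np

rng = np.random.default_rng(1)
p, r, K, T = 12, 3, 10, 4

# Rank-r stand-in for the centred old samples S_old^(n); B plays the role of
# [S_new, sqrt(KT/(K+T)) * (mean_old - mean_new)] from (41).
S_old = rng.standard_normal((p, r)) @ rng.standard_normal((r, K))
B = rng.standard_normal((p, T + 1))

# Step 1: kept left singular vectors / singular values of S_old
U_r, Sig_r, _ = np.linalg.svd(S_old, full_matrices=False)
U_r, Sig_r = U_r[:, :r], Sig_r[:r]

# Step 2: QR decomposition of the part of B outside span(U_r), cf. (42)
Q, R = np.linalg.qr(B - U_r @ (U_r.T @ B))

# Small SVD of the augmented core matrix, cf. (43)
core = np.block([[np.diag(Sig_r), U_r.T @ B],
                 [np.zeros((R.shape[0], r)), R]])
U_t, Sig_t, _ = np.linalg.svd(core)

# Updated projective matrix, cf. (45)
U_new = np.hstack([U_r, Q]) @ U_t

# It spans the same subspace as a direct SVD of [S_old, B]
U_ref = np.linalg.svd(np.hstack([S_old, B]), full_matrices=False)[0][:, :U_new.shape[1]]
print(np.allclose(U_new @ U_new.T, U_ref @ U_ref.T))  # True
```

Only the small $(r+T+1)$-sized core matrix is ever decomposed, which is where the independence from the total number of training samples comes from.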
3.3. The Complexity Analysis. For a tensor dataset $X=\{X_{1},\ldots,X_{M}\}$, $X_{i}\in\mathbb{R}^{I_{1}\times\cdots\times I_{N}}$, without loss of generality it is assumed that all dimensions are equal, that is, $I_{1}=\cdots=I_{N}=I$.

Vector-based PCA converts each datum into a vector and constructs a data matrix $X\in\mathbb{R}^{M\times D}$, $D=I^{N}$. Its main computational cost contains three parts: the computation of the covariance matrix, the eigen decomposition of the covariance matrix, and the computation of the low-dimensional features. The time complexity of computing the covariance matrix is $O(MI^{2N})$ and the time complexity of the eigen decomposition is $O(I^{3N})$, so the total time complexity is $O(MI^{2N}+I^{3N})$.

Letting the number of iterations be 1, the time complexity of computing the mode-$n$ covariance matrices for MPCA is $O(MNI^{N+1})$, the time complexity of the eigen decompositions is $O(NI^{3})$, and the time complexity of computing the low-dimensional features is $O(MNI^{N+1})$, so the total time complexity is $O(MNI^{N+1}+NI^{3})$. Considering the time complexity, MPCA is therefore superior to PCA.

For ITPCA, it is assumed that $T$ incremental datasets are added. MPCA would have to recompute the mode-$n$ covariance matrices and conduct the eigen decompositions over both the initial and the incremental datasets; the more training samples there are, the higher the time cost. If the updated-SVD technique is used, we only need to compute a QR decomposition and an SVD. The time complexity of the QR decomposition is $O(NI^{N+1})$, and the time complexity of the rank-$k$ decomposition of the $(r+I)\times(r+I^{N-1})$ matrix is $O(N(r+I)k^{2})$. It can be seen that the time complexity of the updated-SVD step does not depend on the number of newly added samples.

Figure 1: The samples in the USPS dataset.
Taking the space complexity into account, if the training samples are reduced to a low-dimensional space of dimension $D=\prod_{n=1}^{N}d_{n}$, then PCA needs $D\prod_{n=1}^{N}I_{n}$ bytes to save its projective matrix while MPCA needs $\sum_{n=1}^{N}I_{n}d_{n}$ bytes, so MPCA has lower space complexity than PCA. For incremental learning, both PCA and MPCA need $M\prod_{n=1}^{N}I_{n}$ bytes to save the initial training samples, whereas ITPCA only needs $\sum_{n=1}^{N}I_{n}^{2}$ bytes to keep the mode-$n$ covariance matrices.
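For concreteness, with the USPS setting used in Section 4 ($16\times16$ images, so $N=2$ and $I_{n}=16$) and hypothetical retained dimensions $d_{n}=6$, the counts above work out as follows (stated in stored entries; the paper counts bytes, so an element-size factor is elided):

```python
N, I, d, M = 2, 16, 6, 1000   # illustrative USPS-like setting

D = d ** N                    # reduced dimension, prod(d_n) = 36
pca_proj  = D * I ** N        # PCA projective matrix:  D * prod(I_n)
mpca_proj = N * I * d         # MPCA projections:       sum(I_n * d_n)
retrain   = M * I ** N        # samples PCA/MPCA must keep for retraining
itpca_mem = N * I ** 2        # ITPCA: mode-n covariance matrices only

print(pca_proj, mpca_proj, retrain, itpca_mem)  # 9216 192 256000 512
```

Even in this tiny two-mode case, keeping the mode-$n$ covariance matrices (512 entries) is far cheaper than keeping the 1000 raw training images (256000 entries).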
4. Experiments

In this section, handwritten digit recognition experiments on the USPS image dataset are conducted to evaluate the performance of incremental tensor principal component analysis. The USPS handwritten digit dataset has 9298 images of the digits zero to nine, shown in Figure 1; each image is of size 16 × 16. In this paper we choose 1000 images and divide them into initial training samples, newly added samples, and test samples. Furthermore, the nearest neighbor classifier is employed to classify the low-dimensional features. The recognition results are compared with PCA [26], IPCA [15], and MPCA [11].
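The nearest neighbor classification stage is independent of which projection produced the features. A minimal 1-NN sketch on synthetic stand-in features (purely illustrative; the real experiments use projected USPS digits):

```python
import numpy as np

def nn_classify(train_feats, train_labels, test_feats):
    # 1-NN: assign the label of the closest training feature (Euclidean distance)
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=2)
    return train_labels[np.argmin(d, axis=1)]

# Two well-separated synthetic clusters standing in for projected digit features
rng = np.random.default_rng(2)
train = np.vstack([rng.normal(0, 0.1, (20, 36)), rng.normal(5, 0.1, (20, 36))])
labels = np.array([0] * 20 + [1] * 20)
test = np.vstack([rng.normal(0, 0.1, (5, 36)), rng.normal(5, 0.1, (5, 36))])

print(nn_classify(train, labels, test))  # [0 0 0 0 0 1 1 1 1 1]
```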
At first we choose 70 samples belonging to four classes from the initial training samples. At each round of incremental learning, 70 samples belonging to two further classes are added, so after three rounds the training samples cover ten class labels with 70 samples per class. The remaining original training samples are used as the testing dataset. All algorithms are implemented in MATLAB 2010 on an Intel (R) Core (TM) i5-3210M CPU at 2.5 GHz with 4 GB RAM.
Firstly, 36 PCs are preserved and fed into the nearest neighbor classifier to obtain the recognition results, which are plotted in Figure 2. It can be seen that MPCA and ITPCA are better than PCA and IPCA for the initial learning; the probable reason is that MPCA and ITPCA employ the tensor representation and thus preserve structure information.
Figure 2: The recognition results for 36 PCs of the initial learning (recognition rate versus the number of class labels, for PCA, IPCA, MPCA, and ITPCA).

Figure 3: The recognition results of different methods for the first incremental learning (recognition rate versus the number of low-dimensional features).

The recognition results at the different learning stages are shown in Figures 3, 4, and 5. The recognition results of the four methods fluctuate strongly when the number of low-dimensional features is small, but as the number of features grows the recognition performance becomes stable. In general, MPCA and ITPCA are superior to PCA and IPCA. Although ITPCA and MPCA show comparable performance in the first two learning rounds, ITPCA begins to surpass MPCA after the third round. Figure 6 gives the best recognition rates of the different methods, from which the same conclusion as in Figures 3, 4, and 5 can be drawn.
The time and space complexity of the different methods are shown in Figures 7 and 8, respectively. Considering the time complexity, at the stage of initial learning PCA has the lowest time cost. As new samples are added, the time cost of PCA and MPCA grows greatly while that of IPCA and ITPCA stays stable, and ITPCA grows more slowly than MPCA. The reason is that ITPCA performs incremental learning with the updated-SVD technique and avoids decomposing the mode-$n$ covariance matrix of the original samples again. Considering the space complexity, it is easy to see that ITPCA has the lowest space cost among the four compared methods.

Figure 4: The recognition results of different methods for the second incremental learning.

Figure 5: The recognition results of different methods for the third incremental learning.
Figure 6: The comparison of recognition performance of different methods (best recognition rates for 6, 8, and 10 class labels).
Figure 7: The comparison of time complexity (in seconds) of different methods versus the number of class labels.
Figure 8: The comparison of space complexity of different methods versus the number of class labels.

5. Conclusion

This paper presents incremental tensor principal component analysis based on the updated-SVD technique, which takes full advantage of the redundancy of the spatial structure information and supports online learning. Furthermore, this paper proves that PCA and 2DPCA are special cases of MPCA and that all of them can be unified into the graph embedding framework. This paper also analyzes incremental learning based on a single sample and on multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that principal component analysis based on the tensor representation is superior to principal component analysis based on the vector representation. Although at the stage of initial learning MPCA has better recognition performance than ITPCA, the learning capability of ITPCA gradually improves and eventually exceeds that of MPCA. Moreover, even as new samples are added, the time and space costs of ITPCA grow only slowly.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

This work has been funded with support from the National Natural Science Foundation of China (61272448), the Doctoral Fund of the Ministry of Education of China (20110181130007), and the Young Scientist Project of Chengdu University (no. 2013XJZ21).
References
[1] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear discriminant analysis with regularization and aggregation for tensor object recognition," IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 103–123, 2009.
[2] C. Liu, K. He, J.-L. Zhou, and C.-B. Gao, "Discriminant orthogonal rank-one tensor projections for face recognition," in Intelligent Information and Database Systems, N. T. Nguyen, C.-G. Kim, and A. Janiak, Eds., vol. 6592 of Lecture Notes in Computer Science, pp. 203–211, 2011.
[3] G.-F. Lu, Z. Lin, and Z. Jin, "Face recognition using discriminant locality preserving projections based on maximum margin criterion," Pattern Recognition, vol. 43, no. 10, pp. 3572–3579, 2010.
[4] D. Tao, X. Li, X. Wu, and S. J. Maybank, "General tensor discriminant analysis and Gabor features for gait recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1700–1715, 2007.
[5] F. Nie, S. Xiang, Y. Song, and C. Zhang, "Extracting the optimal dimensionality for local tensor discriminant analysis," Pattern Recognition, vol. 42, no. 1, pp. 105–114, 2009.
[6] Z.-Z. Yu, C.-C. Jia, W. Pang, C.-Y. Zhang, and L.-H. Zhong, "Tensor discriminant analysis with multiscale features for action modeling and categorization," IEEE Signal Processing Letters, vol. 19, no. 2, pp. 95–98, 2012.
[7] S. J. Wang, J. Yang, M. F. Sun, X. J. Peng, M. M. Sun, and C. G. Zhou, "Sparse tensor discriminant color space for face verification," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 6, pp. 876–888, 2012.
[8] J. L. Minoi, C. E. Thomaz, and D. F. Gillies, "Tensor-based multivariate statistical discriminant methods for face applications," in Proceedings of the International Conference on Statistics in Science, Business and Engineering (ICSSBE '12), pp. 1–6, September 2012.
[9] N. Tang, X. Gao, and X. Li, "Tensor subclass discriminant analysis for radar target classification," Electronics Letters, vol. 48, no. 8, pp. 455–456, 2012.
[10] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "A survey of multilinear subspace learning for tensor data," Pattern Recognition, vol. 44, no. 7, pp. 1540–1551, 2011.
[11] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: multilinear principal component analysis of tensor objects," IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18–39, 2008.
[12] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, "Graph embedding and extensions: a general framework for dimensionality reduction," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 1, pp. 40–51, 2007.
[13] R. Plamondon and S. N. Srihari, "On-line and off-line handwriting recognition: a comprehensive survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 63–84, 2000.
[14] C. M. Johnson, "A survey of current research on online communities of practice," Internet and Higher Education, vol. 4, no. 1, pp. 45–60, 2001.
[15] P. Hall, D. Marshall, and R. Martin, "Merging and splitting eigenspace models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 9, pp. 1042–1049, 2000.
[16] J. Sun, D. Tao, S. Papadimitriou, P. S. Yu, and C. Faloutsos, "Incremental tensor analysis: theory and applications," ACM Transactions on Knowledge Discovery from Data, vol. 2, no. 3, article 11, 2008.
[17] J. Wen, X. Gao, Y. Yuan, D. Tao, and J. Li, "Incremental tensor biased discriminant analysis: a new color-based visual tracking method," Neurocomputing, vol. 73, no. 4–6, pp. 827–839, 2010.
[18] J.-G. Wang, E. Sung, and W.-Y. Yau, "Incremental two-dimensional linear discriminant analysis with applications to face recognition," Journal of Network and Computer Applications, vol. 33, no. 3, pp. 314–322, 2010.
[19] X. Qiao, R. Xu, Y.-W. Chen, T. Igarashi, K. Nakao, and A. Kashimoto, "Generalized N-Dimensional Principal Component Analysis (GND-PCA) based statistical appearance modeling of facial images with multiple modes," IPSJ Transactions on Computer Vision and Applications, vol. 1, pp. 231–241, 2009.
[20] H. Kong, X. Li, L. Wang, E. K. Teoh, J.-G. Wang, and R. Venkateswarlu, "Generalized 2D principal component analysis," in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '05), vol. 1, pp. 108–113, August 2005.
[21] D. Zhang and Z.-H. Zhou, "(2D)² PCA: two-directional two-dimensional PCA for efficient face representation and recognition," Neurocomputing, vol. 69, no. 1–3, pp. 224–231, 2005.
[22] J. Ye, "Generalized low rank approximations of matrices," Machine Learning, vol. 61, no. 1–3, pp. 167–191, 2005.
[23] J. Yang, D. Zhang, A. F. Frangi, and J.-Y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
[24] J. Yang and J.-Y. Yang, "From image vector to matrix: a straightforward image projection technique: IMPCA vs. PCA," Pattern Recognition, vol. 35, no. 9, pp. 1997–1999, 2002.
[25] J. Kwok and H. Zhao, "Incremental eigen decomposition," in Proceedings of the International Conference on Artificial Neural Networks (ICANN '03), pp. 270–273, Istanbul, Turkey, June 2003.
[26] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
where the similarity matrix $W\in\mathbb{R}^{M\times M}$ has $W_{ij}=1/M$ for all $i,j$. So (16) can be written as
$$\sum_{m=1}^{M}\bigl\|Y_{m}-\bar{Y}\bigr\|^{2}=\frac{1}{2}\sum_{i,j=1}^{M}W_{ij}\bigl\|Y_{i}-Y_{j}\bigr\|^{2}=\frac{1}{2}\sum_{i,j=1}^{M}W_{ij}\Bigl\|X_{i}\times_{n}U^{(n)}\big|_{n=1}^{N}-X_{j}\times_{n}U^{(n)}\big|_{n=1}^{N}\Bigr\|^{2}.\tag{18}$$
So the theorem is proved
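For ordinary vectors, the equality between total scatter and averaged pairwise distances used in (18) can be checked directly; this sketch assumes $W_{ij}=1/M$:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 50
Y = rng.standard_normal((M, 7))     # M low-dimensional feature vectors Y_m

# Left-hand side: total scatter about the mean
lhs = np.sum(np.linalg.norm(Y - Y.mean(0), axis=1) ** 2)

# Right-hand side: (1/2) * sum_ij W_ij ||Y_i - Y_j||^2 with W_ij = 1/M
pair = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2) ** 2
rhs = 0.5 * pair.sum() / M

print(np.allclose(lhs, rhs))  # True
```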
3. Incremental Tensor Principal Component Analysis

3.1. Incremental Learning Based on a Single Sample. Given initial training samples $X_{\mathrm{old}}=\{X_{1},\ldots,X_{K}\}$, $X_{i}\in\mathbb{R}^{I_{1}\times\cdots\times I_{N}}$, when a new sample $X_{\mathrm{new}}\in\mathbb{R}^{I_{1}\times\cdots\times I_{N}}$ is added, the training dataset becomes $X=\{X_{\mathrm{old}},X_{\mathrm{new}}\}$.
The mean tensor of the initial samples is
$$\bar{X}_{\mathrm{old}}=\frac{1}{K}\sum_{i=1}^{K}X_{i}.\tag{19}$$
The covariance tensor of the initial samples is
$$C_{\mathrm{old}}=\sum_{i=1}^{K}\bigl\|X_{i}-\bar{X}_{\mathrm{old}}\bigr\|^{2}.\tag{20}$$
The mode-$n$ covariance matrix of the initial samples is
$$C^{(n)}_{\mathrm{old}}=\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}.\tag{21}$$
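In implementation terms, $X^{(n)}_{i}$ denotes the mode-$n$ unfolding of the tensor $X_{i}$. A sketch of (21) for third-order tensors; the unfolding convention used here (mode-$n$ fibres as rows) is one common choice and is an assumption, not taken from the paper:

```python
import numpy as np

def unfold(X, n):
    # Mode-n unfolding: dimension n becomes the rows, the rest are flattened
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

rng = np.random.default_rng(4)
K, shape = 10, (5, 4, 3)
samples = [rng.standard_normal(shape) for _ in range(K)]
mean = sum(samples) / K

def mode_n_cov(samples, mean, n):
    # C_old^(n) = sum_i (X_i^(n) - mean^(n)) (X_i^(n) - mean^(n))^T, cf. (21)
    return sum(unfold(X - mean, n) @ unfold(X - mean, n).T for X in samples)

C0 = mode_n_cov(samples, mean, 0)
print(C0.shape)  # (5, 5)
```

The result is a small $I_{n}\times I_{n}$ symmetric positive semidefinite matrix, one per mode.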
When the new sample is added, the mean tensor is
$$\bar{X}=\frac{1}{K+1}\sum_{i=1}^{K+1}X_{i}=\frac{1}{K+1}\Bigl(\sum_{i=1}^{K}X_{i}+X_{\mathrm{new}}\Bigr)=\frac{1}{K+1}\bigl(K\bar{X}_{\mathrm{old}}+X_{\mathrm{new}}\bigr).\tag{22}$$
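Equation (22) lets the mean be updated without revisiting the old samples, which is easy to confirm numerically (shapes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
K = 9
old = rng.standard_normal((K, 4, 4))       # K old sample tensors
x_new = rng.standard_normal((4, 4))        # the newly added sample

mean_old = old.mean(0)
mean_inc = (K * mean_old + x_new) / (K + 1)          # incremental mean, (22)
mean_dir = np.concatenate([old, x_new[None]]).mean(0)  # direct recomputation

print(np.allclose(mean_inc, mean_dir))  # True
```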
The mode-$n$ covariance matrix is expressed as follows:
$$C^{(n)}=\sum_{i=1}^{K+1}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)^{T}=\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)^{T}+\bigl(X^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)^{T},\tag{23}$$
where the first item of (23) is
$$\begin{aligned}
\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)^{T}
&=\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}+\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}+\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}\\
&=\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}
+K\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}\\
&\quad+\sum_{i=1}^{K}\Bigl[\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}+\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}\Bigr]\\
&=\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}+K\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}\\
&=C^{(n)}_{\mathrm{old}}+K\Bigl(\bar{X}^{(n)}_{\mathrm{old}}-\frac{K\bar{X}^{(n)}_{\mathrm{old}}+X^{(n)}_{\mathrm{new}}}{K+1}\Bigr)\Bigl(\bar{X}^{(n)}_{\mathrm{old}}-\frac{K\bar{X}^{(n)}_{\mathrm{old}}+X^{(n)}_{\mathrm{new}}}{K+1}\Bigr)^{T}\\
&=C^{(n)}_{\mathrm{old}}+\frac{K}{(K+1)^{2}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)^{T},
\end{aligned}\tag{24}$$
where the cross terms vanish because $\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)=0$.
The second item of (23) is
$$\bigl(X^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{\mathrm{new}}-\bar{X}^{(n)}\bigr)^{T}=\Bigl(X^{(n)}_{\mathrm{new}}-\frac{K\bar{X}^{(n)}_{\mathrm{old}}+X^{(n)}_{\mathrm{new}}}{K+1}\Bigr)\Bigl(X^{(n)}_{\mathrm{new}}-\frac{K\bar{X}^{(n)}_{\mathrm{old}}+X^{(n)}_{\mathrm{new}}}{K+1}\Bigr)^{T}=\frac{K^{2}}{(K+1)^{2}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)^{T}.\tag{25}$$
Consequently, the mode-$n$ covariance matrix is updated as follows:
$$C^{(n)}=C^{(n)}_{\mathrm{old}}+\frac{K}{K+1}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-X^{(n)}_{\mathrm{new}}\bigr)^{T}.\tag{26}$$
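As with the multi-sample case, (26) can be sanity-checked against a direct recomputation on random stand-in unfoldings (illustrative, not the paper's data). Note that $C^{(n)}_{\mathrm{new}}$ is absent because a single sample has zero scatter about its own mean:

```python
import numpy as np

rng = np.random.default_rng(6)
K, rows, cols = 7, 4, 6
X = rng.standard_normal((K + 1, rows, cols))   # mode-n unfoldings; the last is X_new
old, x_new = X[:K], X[K]

def scatter(S, m):
    # sum_i (S_i - m)(S_i - m)^T
    return sum((s - m) @ (s - m).T for s in S)

m_old, m_all = old.mean(0), X.mean(0)
d = m_old - x_new

C_direct = scatter(X, m_all)                          # recompute from all samples
C_update = scatter(old, m_old) + K / (K + 1) * d @ d.T  # single-sample update, (26)

print(np.allclose(C_direct, C_update))  # True
```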
Therefore, when a new sample is added, the projective matrices are obtained from the eigen decomposition of (26).
3.2. Incremental Learning Based on Multiple Samples. Given an initial training dataset $X_{\mathrm{old}}=\{X_{1},\ldots,X_{K}\}$, $X_{i}\in\mathbb{R}^{I_{1}\times\cdots\times I_{N}}$, when new samples $X_{\mathrm{new}}=\{X_{K+1},\ldots,X_{K+T}\}$ are added, the training dataset becomes $X=\{X_{1},\ldots,X_{K},X_{K+1},\ldots,X_{K+T}\}$. In this case the mean tensor is updated to
$$\bar{X}=\frac{1}{K+T}\sum_{i=1}^{K+T}X_{i}=\frac{1}{K+T}\Bigl(\sum_{i=1}^{K}X_{i}+\sum_{i=K+1}^{K+T}X_{i}\Bigr)=\frac{1}{K+T}\bigl(K\bar{X}_{\mathrm{old}}+T\bar{X}_{\mathrm{new}}\bigr).\tag{27}$$
Its mode-$n$ covariance matrix is
$$C^{(n)}=\sum_{i=1}^{K+T}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)^{T}=\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)^{T}+\sum_{i=K+1}^{K+T}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)^{T}.\tag{28}$$
The first item in (28) is written as follows:
$$\begin{aligned}
\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)^{T}
&=\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}
+K\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}\\
&\quad+\sum_{i=1}^{K}\Bigl[\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}+\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}\Bigr],
\end{aligned}\tag{29}$$
where the cross terms cancel, because $\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)=0$ implies
$$\sum_{i=1}^{K}\Bigl[\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}+\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}_{\mathrm{old}}\bigr)^{T}\Bigr]=0,$$
and, using (27),
$$K\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}\bigr)^{T}=\frac{KT^{2}}{(K+T)^{2}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)^{T}.\tag{30}$$
Putting (30) into (29), then (29) becomes
$$\sum_{i=1}^{K}\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)\bigl(X^{(n)}_{i}-\bar{X}^{(n)}\bigr)^{T}=C^{(n)}_{\mathrm{old}}+\frac{KT^{2}}{(K+T)^{2}}\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)\bigl(\bar{X}^{(n)}_{\mathrm{old}}-\bar{X}^{(n)}_{\mathrm{new}}\bigr)^{T}.\tag{31}$$
At first we choose 70 samples belonging to four classesfrom initial training samples For each time of incrementallearning 70 samples which belong to the other two classesare added So after three times the class labels of the trainingsamples are ten and there are 70 samples in each class Theresting samples of original training samples are considered astesting dataset All algorithms are implemented in MATLAB2010 on an Intel (R) Core (TM) i5-3210M CPU 25GHzwith 4G RAM
Firstly 36 PCs are preserved and fed into the nearestneighbor classifier to obtain the recognition results Theresults are plotted in Figure 2 It can be seen that MPCA andITPCA are better than PCA and IPCA for initial learningthe probable reason is that MPCA and ITPCA employ tensorrepresentation to preserve the structure information
The recognition results under different learning stages areshown in Figures 3 4 and 5 It can be seen that the recogni-tion results of these four methods always fluctuate violently
6 65 7 75 8 85 9 95 10093
0935
094
0945
095
0955
096
0965
097
0975
098
The number of class labels
The r
ecog
nitio
n re
sults
PCAIPCA
MPCAITPCA
Figure 2 The recognition results for 36 PCs of the initial learning
50 100 150 200 250094
0945
095
0955
096
0965
097
0975
098
0985
PCAIPCA
MPCAITPCA
The number of low-dimensional features
The r
ecog
nitio
n re
sults
Figure 3 The recognition results of different methods of the firstincremental learning
when the numbers of low-dimensional features are smallHowever with the increment of the feature number therecognition performance keeps stable Generally MPCA andITPCA are superior to PCA and IPCA Although ITPCAhave comparative performance at first two learning ITPCAbegin to surmount MPCA after the third learning Figure 6has given the best recognition percents of different methodsWe can get the same conclusion as shown in Figures 3 4 and5
The time and space complexity of different methods areshown in Figures 7 and 8 respectively Taking the timecomplexity into account it can be found that at the stage ofinitial learning PCA has the lowest time complexity With
8 Mathematical Problems in Engineering
50 100 150 200 2500945
095
0955
096
0965
097
0975
PCAIPCA
MPCAITPCA
The number of low-dimensional features
The r
ecog
nitio
n re
sults
Figure 4The recognition results of different methods of the secondincremental learning
PCAIPCA
MPCAITPCA
50 100 150 200 250091
0915
092
0925
093
0935
094
0945
095
0955
096
The number of low-dimensional features
The r
ecog
nitio
n re
sults
Figure 5 The recognition results of different methods of the thirdincremental learning
the increment of new samples the time complexity ofPCA and MPCA grows greatly and the time complexityof IPCA and ITPCA becomes stable ITPCA has slowerincrement than MPCAThe reason is that ITPCA introducesincremental learning based on the updated-SVD techniqueand avoids decomposing the mode-119899 covariance matrix oforiginal samples again Considering the space complexity itis easy to find that ITPCA has the lowest space complexityamong four compared methods
092
The r
ecog
nitio
n re
sults
093
094
095
096
097
098
099
Class 6 Class 8The number of class labels
Class 10
PCAIPCA
MPCAITPCA
Figure 6 The comparison of recognition performance of differentmethods
4 6 8 1001
02
03
04
05
06
07
08
The number of class labels
The t
ime c
ompl
exity
(s)
PCAIPCA
MPCAITPCA
Figure 7 The comparison of time complexity of different methods
5 Conclusion
This paper presents incremental tensor principal componentanalysis based on updated-SVD technique to take full advan-tage of redundancy of the space structure information andonline learning Furthermore this paper proves that PCAand 2DPCA are the special cases of MPCA and all of themcan be unified into the graph embedding framework This
Mathematical Problems in Engineering 9
PCAIPCA
MPCAITPCA
4 6 8 100
1
2
3
4
5
6
7
8
9
The number of class labels
The s
pace
com
plex
ity (
M)
Figure 8The comparison of space complexity of differentmethods
paper also analyzes incremental learning based on singlesample and multiple samples in detail The experimentson handwritten digit recognition have demonstrated thatprincipal component analysis based on tensor representationis superior to tensor principal component analysis based onvector representation Although at the stage of initial learn-ing MPCA has better recognition performance than ITPCAthe learning capability of ITPCA becomes well gradually andexceeds MPCA Moreover even if new samples are addedthe time and space complexity of ITPCA still keep slowerincrement
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This present work has been funded with support from theNational Natural Science Foundation of China (61272448)the Doctoral Fund of Ministry of Education of China(20110181130007) the Young Scientist Project of ChengduUniversity (no 2013XJZ21)
References
[1] H Lu K N Plataniotis and A N Venetsanopoulos ldquoUncorre-lated multilinear discriminant analysis with regularization andaggregation for tensor object recognitionrdquo IEEETransactions onNeural Networks vol 20 no 1 pp 103ndash123 2009
[2] C Liu K He J-L Zhou and C-B Gao ldquoDiscriminant orthog-onal rank-one tensor projections for face recognitionrdquo in Intel-ligent Information and Database Systems N T Nguyen C-GKim and A Janiak Eds vol 6592 of Lecture Notes in ComputerScience pp 203ndash211 2011
[3] G-F Lu Z Lin andZ Jin ldquoFace recognition using discriminantlocality preserving projections based on maximum margincriterionrdquo Pattern Recognition vol 43 no 10 pp 3572ndash35792010
[4] D Tao X Li X Wu and S J Maybank ldquoGeneral tensordiscriminant analysis and Gabor features for gait recognitionrdquoIEEE Transactions on Pattern Analysis andMachine Intelligencevol 29 no 10 pp 1700ndash1715 2007
[5] F Nie S Xiang Y Song and C Zhang ldquoExtracting the optimaldimensionality for local tensor discriminant analysisrdquo PatternRecognition vol 42 no 1 pp 105ndash114 2009
[6] Z-Z Yu C-C Jia W Pang C-Y Zhang and L-H ZhongldquoTensor discriminant analysis with multiscale features foraction modeling and categorizationrdquo IEEE Signal ProcessingLetters vol 19 no 2 pp 95ndash98 2012
[7] S J Wang J Yang M F Sun X J Peng M M Sun and CG Zhou ldquoSparse tensor discriminant color space for face ver-ificationrdquo IEEE Transactions on Neural Networks and LearningSystems vol 23 no 6 pp 876ndash888 2012
[8] J L Minoi C EThomaz and D F Gillies ldquoTensor-based mul-tivariate statistical discriminant methods for face applicationsrdquoin Proceedings of the International Conference on Statistics in Sci-ence Business and Engineering (ICSSBE rsquo12) pp 1ndash6 September2012
[9] N Tang X Gao andX Li ldquoTensor subclass discriminant analy-sis for radar target classificationrdquo Electronics Letters vol 48 no8 pp 455ndash456 2012
[10] H Lu K N Plataniotis and A N Venetsanopoulos ldquoA surveyof multilinear subspace learning for tensor datardquo Pattern Recog-nition vol 44 no 7 pp 1540ndash1551 2011
[11] H Lu K N Plataniotis and A N Venetsanopoulos ldquoMPCAmultilinear principal component analysis of tensor objectsrdquoIEEE Transactions on Neural Networks vol 19 no 1 pp 18ndash392008
[12] S YanD Xu B ZhangH-J ZhangQ Yang and S Lin ldquoGraphembedding and extensions a general framework for dimen-sionality reductionrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 29 no 1 pp 40ndash51 2007
[13] R Plamondon and SN Srihari ldquoOn-line and off-line handwrit-ing recognition a comprehensive surveyrdquo IEEE Transactions onPattern Analysis and Machine Intelligence vol 22 no 1 pp 63ndash84 2000
[14] C M Johnson ldquoA survey of current research on online com-munities of practicerdquo Internet and Higher Education vol 4 no1 pp 45ndash60 2001
[15] P Hall D Marshall and R Martin ldquoMerging and splittingeigenspace modelsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 22 no 9 pp 1042ndash1049 2000
[16] J Sun D Tao S Papadimitriou P S Yu and C FaloutsosldquoIncremental tensor analysis theory and applicationsrdquo ACMTransactions on Knowledge Discovery from Data vol 2 no 3article 11 2008
[17] J Wen X Gao Y Yuan D Tao and J Li ldquoIncremental tensorbiased discriminant analysis a new color-based visual trackingmethodrdquo Neurocomputing vol 73 no 4ndash6 pp 827ndash839 2010
[18] J-G Wang E Sung and W-Y Yau ldquoIncremental two-dimen-sional linear discriminant analysis with applications to facerecognitionrdquo Journal of Network and Computer Applicationsvol 33 no 3 pp 314ndash322 2010
10 Mathematical Problems in Engineering
[19] X Qiao R Xu Y-W Chen T Igarashi K Nakao and AKashimoto ldquoGeneralized N-Dimensional Principal Compo-nent Analysis (GND-PCA) based statistical appearance mod-eling of facial images with multiple modesrdquo IPSJ Transactionson Computer Vision and Applications vol 1 pp 231ndash241 2009
[20] H Kong X Li L Wang E K Teoh J-G Wang and RVenkateswarlu ldquoGeneralized 2Dprincipal component analysisrdquoin Proceedings of the IEEE International Joint Conference onNeural Networks (IJCNN rsquo05) vol 1 pp 108ndash113 August 2005
[21] D Zhang and Z-H Zhou ldquo(2D)2 PCA two-directional two-dimensional PCA for efficient face representation and recogni-tionrdquo Neurocomputing vol 69 no 1ndash3 pp 224ndash231 2005
[22] J Ye ldquoGeneralized low rank approximations of matricesrdquoMachine Learning vol 61 no 1ndash3 pp 167ndash191 2005
[23] J Yang D Zhang A F Frangi and J-Y Yang ldquoTwo-dimen-sional PCA a new approach to appearance-based face represen-tation and recognitionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 26 no 1 pp 131ndash137 2004
[24] J Yang and J-Y Yang ldquoFrom image vector to matrix a straight-forward image projection technique-IMPCA vs PCArdquo PatternRecognition vol 35 no 9 pp 1997ndash1999 2002
[25] J Kwok and H Zhao ldquoIncremental eigen decompositionrdquo inProceedings of the International Conference on Artificial NeuralNetworks (ICANN rsquo03) pp 270ndash273 Istanbul Turkey June2003
[26] PN Belhumeur J PHespanha andD J Kriegman ldquoEigenfacesvs fisherfaces recognition using class specific linear projec-tionrdquo IEEE Transactions on Pattern Analysis and Machine Intel-ligence vol 19 no 7 pp 711ndash720 1997
Mathematical Problems in Engineering 5
Consequently, the mode-$n$ covariance matrix is updated as follows:

$$C^{(n)} = C_{\mathrm{old}}^{(n)} + \frac{K}{K+1}\left(\bar{X}_{\mathrm{old}}^{(n)} - X_{\mathrm{new}}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - X_{\mathrm{new}}^{(n)}\right)^{T}. \quad (26)$$
Therefore, when a new sample is added, the projective matrices are obtained from the eigen decomposition of (26).
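As a quick numerical check of the single-sample update (26), the following NumPy sketch (our own illustration, not the paper's code) compares the incrementally updated scatter matrix against one recomputed from scratch; for simplicity the mode-$n$ unfoldings are reduced to plain vectors, and the helper name `scatter` is ours:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 4                      # K old samples, feature dimension d

X_old = rng.standard_normal((K, d))
x_new = rng.standard_normal(d)

def scatter(X):
    # sum of outer products of the samples around their mean
    D = X - X.mean(axis=0)
    return D.T @ D

C_old = scatter(X_old)
m_old = X_old.mean(axis=0)

# incremental update, eq. (26): C = C_old + K/(K+1) (m_old - x_new)(m_old - x_new)^T
diff = (m_old - x_new)[:, None]
C_inc = C_old + K / (K + 1) * (diff @ diff.T)

# recompute from scratch over all K+1 samples
C_full = scatter(np.vstack([X_old, x_new]))
print(np.allclose(C_inc, C_full))   # True
```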
3.2. Incremental Learning Based on Multiple Samples. Given an initial training dataset $X_{\mathrm{old}} = \{X_1, \ldots, X_K\}$, $X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, when new samples $X_{\mathrm{new}} = \{X_{K+1}, \ldots, X_{K+T}\}$ are added, the training dataset becomes $X = \{X_1, \ldots, X_K, X_{K+1}, \ldots, X_{K+T}\}$. In this case the mean tensor is updated as

$$\bar{X} = \frac{1}{K+T}\sum_{i=1}^{K+T} X_i = \frac{1}{K+T}\left(\sum_{i=1}^{K} X_i + \sum_{i=K+1}^{K+T} X_i\right) = \frac{1}{K+T}\left(K\bar{X}_{\mathrm{old}} + T\bar{X}_{\mathrm{new}}\right). \quad (27)$$
Its mode-119899 covariance matrix is
119862(119899)
=
119870+119879
sum
119894=1
(119883(119899)
119894minus 119883(119899)
) (119883(119899)
119894minus 119883(119899)
)
119879
=
119870
sum
119894=1
(119883(119899)
119894minus 119883(119899)
) (119883(119899)
119894minus 119883(119899)
)
119879
+
119870+119879
sum
119894=119870+1
(119883(119899)
119894minus 119883(119899)
) (119883(119899)
119894minus 119883(119899)
)
119879
(28)
The first item in (28) is written as follows:

$$\sum_{i=1}^{K}\left(X_i^{(n)} - \bar{X}^{(n)}\right)\left(X_i^{(n)} - \bar{X}^{(n)}\right)^{T} = \sum_{i=1}^{K}\left(X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}\right)\left(X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}\right)^{T} + K\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)}\right)^{T} + \sum_{i=1}^{K}\left[\left(X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)}\right)^{T} + \left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)}\right)\left(X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}\right)^{T}\right], \quad (29)$$
where, since $\sum_{i=1}^{K}(X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}) = 0$, the cross terms vanish:

$$\sum_{i=1}^{K}\left[\left(X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)}\right)^{T} + \left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)}\right)\left(X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}\right)^{T}\right] = 0,$$

and, substituting the updated mean (27), that is, $\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)} = \frac{T}{K+T}(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)})$,

$$K\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}^{(n)}\right)^{T} = \frac{KT^{2}}{(K+T)^{2}}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)^{T}. \quad (30)$$
Putting (30) into (29), then (29) becomes

$$\sum_{i=1}^{K}\left(X_i^{(n)} - \bar{X}^{(n)}\right)\left(X_i^{(n)} - \bar{X}^{(n)}\right)^{T} = C_{\mathrm{old}}^{(n)} + \frac{KT^{2}}{(K+T)^{2}}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)^{T}. \quad (31)$$
The second item in (28) is written as follows:

$$\sum_{i=K+1}^{K+T}\left(X_i^{(n)} - \bar{X}^{(n)}\right)\left(X_i^{(n)} - \bar{X}^{(n)}\right)^{T} = C_{\mathrm{new}}^{(n)} + T\left(\bar{X}_{\mathrm{new}}^{(n)} - \bar{X}^{(n)}\right)\left(\bar{X}_{\mathrm{new}}^{(n)} - \bar{X}^{(n)}\right)^{T}, \quad (32)$$
where, since $\bar{X}_{\mathrm{new}}^{(n)} - \bar{X}^{(n)} = \frac{K}{K+T}(\bar{X}_{\mathrm{new}}^{(n)} - \bar{X}_{\mathrm{old}}^{(n)})$ by (27),

$$T\left(\bar{X}_{\mathrm{new}}^{(n)} - \bar{X}^{(n)}\right)\left(\bar{X}_{\mathrm{new}}^{(n)} - \bar{X}^{(n)}\right)^{T} = \frac{K^{2}T}{(K+T)^{2}}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)^{T}. \quad (33)$$
Then (32) becomes

$$\sum_{i=K+1}^{K+T}\left(X_i^{(n)} - \bar{X}^{(n)}\right)\left(X_i^{(n)} - \bar{X}^{(n)}\right)^{T} = C_{\mathrm{new}}^{(n)} + \frac{K^{2}T}{(K+T)^{2}}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)^{T}. \quad (34)$$
Putting (31) and (34) into (28), and noting that $\frac{KT^{2} + K^{2}T}{(K+T)^{2}} = \frac{KT}{K+T}$, we get

$$C^{(n)} = C_{\mathrm{old}}^{(n)} + C_{\mathrm{new}}^{(n)} + \frac{KT}{K+T}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)^{T}. \quad (35)$$
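The merge identity (35) can be verified numerically. The sketch below (our own illustration; the helper name `scatter` is ours) merges the scatter matrices of two sample sets and compares the result with a scatter matrix recomputed from the pooled samples:

```python
import numpy as np

rng = np.random.default_rng(1)
K, T, d = 12, 5, 4                # K old samples, T new samples, dimension d

X_old = rng.standard_normal((K, d))
X_new = rng.standard_normal((T, d))

def scatter(X):
    # sum of outer products of the samples around their mean
    D = X - X.mean(axis=0)
    return D.T @ D

# eq. (35): C = C_old + C_new + KT/(K+T) (m_old - m_new)(m_old - m_new)^T
diff = (X_old.mean(axis=0) - X_new.mean(axis=0))[:, None]
C_merged = scatter(X_old) + scatter(X_new) + K * T / (K + T) * (diff @ diff.T)

C_full = scatter(np.vstack([X_old, X_new]))
print(np.allclose(C_merged, C_full))  # True
```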
It is worth noting that when new samples are available, there is no need to recompute the mode-$n$ covariance matrix of all training samples; we just have to compute the mode-$n$ covariance matrix of the newly added samples and the difference between the mean of the original samples and that of the new samples. However, as in traditional incremental PCA, the eigen decomposition of $C^{(n)}$ must be repeated whenever new samples are added. This repeated eigen decomposition of $C^{(n)}$ certainly causes a heavy computational cost, which is called "the eigen decomposition updating problem." For traditional vector-based incremental learning algorithms, the updated-SVD technique was proposed in [25] to solve the eigen decomposition update. This paper introduces the updated-SVD technique into a tensor-based incremental learning algorithm.
For the original samples, the mode-$n$ covariance matrix is

$$C_{\mathrm{old}}^{(n)} = \sum_{i=1}^{K}\left(X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}\right)\left(X_i^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}\right)^{T} = S_{\mathrm{old}}^{(n)} S_{\mathrm{old}}^{(n)T}, \quad (36)$$
where $S_{\mathrm{old}}^{(n)} = [X_1^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}, \ldots, X_K^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}]$. From the singular value decomposition $S_{\mathrm{old}}^{(n)} = U\Sigma V^{T}$ we get

$$S_{\mathrm{old}}^{(n)} S_{\mathrm{old}}^{(n)T} = \left(U\Sigma V^{T}\right)\left(U\Sigma V^{T}\right)^{T} = U\Sigma V^{T} V \Sigma U^{T} = U\Sigma^{2} U^{T} = \mathrm{eig}\left(C_{\mathrm{old}}^{(n)}\right). \quad (37)$$

So it is easy to derive that the eigenvectors of $C_{\mathrm{old}}^{(n)}$ are the left singular vectors of $S_{\mathrm{old}}^{(n)}$, and its eigenvalues are the squares of the singular values of $S_{\mathrm{old}}^{(n)}$.
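The equivalence in (37) is easy to confirm numerically. The following sketch (our own illustration) checks that the eigenvalues of $SS^{T}$ are the squared singular values of $S$:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((5, 8))          # plays the role of S_old

U, s, Vt = np.linalg.svd(S, full_matrices=False)   # S = U diag(s) V^T
w, V = np.linalg.eigh(S @ S.T)                     # eig of C_old = S S^T

# eigenvalues of S S^T (sorted descending) equal the squared singular values of S
print(np.allclose(np.sort(w)[::-1], s**2))   # True
```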
For the new samples, the mode-$n$ covariance matrix is
$$C_{\mathrm{new}}^{(n)} = \sum_{i=K+1}^{K+T}\left(X_i^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\left(X_i^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)^{T} = S_{\mathrm{new}}^{(n)} S_{\mathrm{new}}^{(n)T}, \quad (38)$$
where $S_{\mathrm{new}}^{(n)} = [X_{K+1}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}, \ldots, X_{K+T}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}]$. According to (35), the updated mode-$n$ covariance matrix is defined as

$$C^{(n)} = C_{\mathrm{old}}^{(n)} + C_{\mathrm{new}}^{(n)} + \frac{KT}{K+T}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)^{T} = S^{(n)} S^{(n)T}, \quad (39)$$
where $S^{(n)} = \left[S_{\mathrm{old}}^{(n)}, S_{\mathrm{new}}^{(n)}, \sqrt{KT/(K+T)}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\right]$. Therefore the updated projective matrix $U^{(n)}$ consists of the left singular vectors of $S^{(n)}$ corresponding to its $P_n$ largest singular values.
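The factorization $C^{(n)} = S^{(n)}S^{(n)T}$ in (39) can be checked directly: build $S^{(n)}$ from the centered old and new samples plus the weighted mean-difference column, and compare $SS^{T}$ with the pooled scatter matrix. This sketch is our own illustration, with samples stored as rows and centered columns assembled into $S$:

```python
import numpy as np

rng = np.random.default_rng(5)
K, T, d = 12, 5, 4
X_old = rng.standard_normal((K, d))
X_new = rng.standard_normal((T, d))

m_old, m_new = X_old.mean(axis=0), X_new.mean(axis=0)
S_old = (X_old - m_old).T                    # d x K centered old samples
S_new = (X_new - m_new).T                    # d x T centered new samples
extra = np.sqrt(K * T / (K + T)) * (m_old - m_new)[:, None]
S = np.hstack([S_old, S_new, extra])         # eq. (39)

X_all = np.vstack([X_old, X_new])
D = X_all - X_all.mean(axis=0)
C = D.T @ D                                  # pooled scatter matrix
print(np.allclose(S @ S.T, C))               # True
```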
The main steps of incremental tensor principal component analysis are listed as follows.
Input: the original samples and the newly added samples.
Output: $N$ projective matrices.

Step 1. Compute and save

$$\mathrm{eig}\left(C_{\mathrm{old}}^{(n)}\right) \approx \left[U_r^{(n)}, \Sigma_r^{(n)}\right]. \quad (40)$$

Step 2. For $n = 1, \ldots, N$:

$$B = \left[S_{\mathrm{new}}^{(n)}, \sqrt{\frac{KT}{K+T}}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\right]. \quad (41)$$

Perform the QR decomposition

$$QR = \left(I - U_r^{(n)} U_r^{(n)T}\right) B. \quad (42)$$

Perform the SVD

$$\mathrm{svd}\begin{bmatrix} \sqrt{\Sigma_r^{(n)}} & U_r^{(n)T} B \\ 0 & R \end{bmatrix} = \tilde{U}\tilde{\Sigma}\tilde{V}^{T}. \quad (43)$$

Compute

$$\left[S_{\mathrm{old}}^{(n)}, B\right] \approx \left(\left[U_r^{(n)}, Q\right]\tilde{U}\right)\tilde{\Sigma}\left(\begin{bmatrix} V_r^{(n)} & 0 \\ 0 & I \end{bmatrix}\tilde{V}\right)^{T}. \quad (44)$$

Then the updated projective matrix is computed as

$$U^{(n)} = \left[U_r^{(n)}, Q\right]\tilde{U}. \quad (45)$$

end

Step 3. Repeat the above steps until the incremental learning is finished.
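Steps (41)–(45) follow the standard updated-SVD scheme. The NumPy sketch below (our own minimal illustration, not the paper's code) implements the rank-preserving case of one such update, starting directly from the thin SVD of $S_{\mathrm{old}}$ so that the singular values play the role of $\sqrt{\Sigma_r}$; the helper name `update_left_svd` is ours:

```python
import numpy as np

def update_left_svd(U, s, B):
    """One updated-SVD step: given the thin SVD S_old = U diag(s) V^T and new
    columns B, return the left singular vectors/values of [S_old, B]."""
    UtB = U.T @ B
    Q, R = np.linalg.qr(B - U @ UtB)        # eq. (42): residual outside span(U)
    r, c = s.size, B.shape[1]
    M = np.block([[np.diag(s), UtB],        # eq. (43): small (r+c) x (r+c) core
                  [np.zeros((c, r)), R]])
    Uc, sc, _ = np.linalg.svd(M)
    return np.hstack([U, Q]) @ Uc, sc       # eq. (45): U_new = [U, Q] Uc

rng = np.random.default_rng(3)
S_old = rng.standard_normal((20, 6))
B = rng.standard_normal((20, 3))            # newly added (weighted) columns

U, s, _ = np.linalg.svd(S_old, full_matrices=False)
U_new, s_new = update_left_svd(U, s, B)

# agrees with a direct SVD of the concatenated matrix [S_old, B]
s_ref = np.linalg.svd(np.hstack([S_old, B]), compute_uv=False)
print(np.allclose(s_new, s_ref))            # True
```

In the truncated setting of Step 1 the relation is approximate rather than exact, which is why (40) and (44) are written with $\approx$.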
3.3. The Complexity Analysis. For a tensor dataset $X = \{X_1, \ldots, X_M\}$, $X_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, without loss of generality it is assumed that all dimensions are equal, that is, $I_1 = \cdots = I_N = I$.

Vector-based PCA converts every sample into a vector and constructs a data matrix $X \in \mathbb{R}^{M \times D}$, $D = I^N$. For vector-based PCA the main computational cost contains three parts: the computation of the covariance matrix, the eigen decomposition of the covariance matrix, and the computation of the low-dimensional features. The time complexity of computing the covariance matrix is $O(MI^{2N})$, the time complexity of the eigen decomposition is $O(I^{3N})$, and the time complexity of computing the low-dimensional features is $O(MI^{2N} + I^{3N})$.

Letting the number of iterations be 1, the time complexity of computing the mode-$n$ covariance matrices for MPCA is $O(MNI^{N+1})$, the time complexity of the eigen decompositions is $O(NI^{3})$, and the time complexity of computing the low-dimensional features is $O(MNI^{N+1})$; so the total time complexity is $O(MNI^{N+1} + NI^{3})$. Considering the time complexity, MPCA is superior to PCA.

For ITPCA it is assumed that $T$ incremental datasets are added. MPCA has to recompute the mode-$n$ covariance matrices and conduct the eigen decompositions for the initial dataset and each incremental dataset: the more training samples there are, the higher the time complexity. If the updated-SVD technique is used, we only need to compute a QR decomposition and an SVD. The time complexity of the QR decomposition is $O(NI^{N+1})$. The time complexity of the rank-$k$ decomposition of the matrix of size $(r+I) \times (r+I^{N-1})$ is
$O(N(r+I)k^{2})$. It can be seen that the time complexity of the updated-SVD has nothing to do with the number of newly added samples.

Figure 1: The samples in the USPS dataset.
Taking the space complexity into account, if the training samples are reduced into a low-dimensional space of dimension $D = \prod_{n=1}^{N} d_n$, then PCA needs $D\prod_{n=1}^{N} I_n$ bytes to save the projective matrices and MPCA needs $\sum_{n=1}^{N} I_n d_n$ bytes. So MPCA has lower space complexity than PCA. For incremental learning, both PCA and MPCA need $M\prod_{n=1}^{N} I_n$ bytes to save the initial training samples; ITPCA only needs $\sum_{n=1}^{N} I_n^{2}$ bytes to keep the mode-$n$ covariance matrices.
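To make these storage counts concrete, the following arithmetic sketch (our own illustration; the values $d_n = 6$ and $M = 700$ are assumptions chosen to match a 36-PC USPS-like setting, not figures from the paper) compares the entry counts for $N = 2$ modes and $I_1 = I_2 = 16$:

```python
# Storage comparison for a USPS-like setting: N = 2 modes, I_1 = I_2 = 16,
# reduced dimensions d_1 = d_2 = 6, M training images (entry counts).
N, I, d, M = 2, 16, 6, 700

D = d ** N                       # low-dimensional size, D = prod(d_n) = 36
pca_proj = D * I ** N            # PCA projection matrix: 36 * 256
mpca_proj = N * I * d            # MPCA factors: sum_n I_n * d_n
pca_incr = M * I ** N            # PCA/MPCA must keep all samples: 700 * 256
itpca_incr = N * I ** 2          # ITPCA keeps mode-n covariances: 2 * 256

print(pca_proj, mpca_proj, pca_incr, itpca_incr)  # 9216 192 179200 512
```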
4. Experiments
In this section, handwritten digit recognition experiments on the USPS image dataset are conducted to evaluate the performance of incremental tensor principal component analysis. The USPS handwritten digit dataset has 9298 images of the digits zero to nine, shown in Figure 1; each image has size 16 × 16. In this paper we choose 1000 images and divide them into initial training samples, newly added samples, and test samples. Furthermore, the nearest neighbor classifier is employed to classify the low-dimensional features. The recognition results are compared with PCA [26], IPCA [15], and MPCA [11].
At first we choose 70 samples per class from four classes as the initial training samples. At each round of incremental learning, the samples of another two classes (70 per class) are added, so after three rounds the training samples cover all ten classes, with 70 samples in each class. The remaining samples are used as the testing dataset. All algorithms are implemented in MATLAB 2010 on an Intel(R) Core(TM) i5-3210M CPU at 2.5 GHz with 4 GB RAM.
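The classification stage described above reduces to a plain 1-nearest-neighbor rule on the low-dimensional features. The sketch below (our own illustration on synthetic data, not the USPS experiment itself; the helper name `nn_classify` is ours) shows the rule on 36-dimensional feature vectors:

```python
import numpy as np

def nn_classify(train_X, train_y, test_X):
    # 1-nearest-neighbor on low-dimensional features (Euclidean distance)
    d2 = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=-1)
    return train_y[d2.argmin(axis=1)]

rng = np.random.default_rng(4)
train_X = rng.standard_normal((70, 36))    # e.g. 36 PCs per training image
train_y = rng.integers(0, 10, 70)
# slightly perturbed copies of the first 5 training points as "test" data
test_X = train_X[:5] + 0.01 * rng.standard_normal((5, 36))

print((nn_classify(train_X, train_y, test_X) == train_y[:5]).all())  # True
```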
Firstly, 36 PCs are preserved and fed into the nearest neighbor classifier to obtain the recognition results, which are plotted in Figure 2. It can be seen that MPCA and ITPCA are better than PCA and IPCA for the initial learning; the probable reason is that MPCA and ITPCA employ the tensor representation to preserve the structure information.
The recognition results at the different learning stages are shown in Figures 3, 4, and 5. It can be seen that the recognition results of the four methods fluctuate violently
Figure 2: The recognition results for 36 PCs of the initial learning (x-axis: the number of class labels; y-axis: the recognition results; curves: PCA, IPCA, MPCA, ITPCA).
Figure 3: The recognition results of the different methods for the first incremental learning (x-axis: the number of low-dimensional features; y-axis: the recognition results; curves: PCA, IPCA, MPCA, ITPCA).
when the number of low-dimensional features is small. However, as the number of features increases, the recognition performance becomes stable. Generally, MPCA and ITPCA are superior to PCA and IPCA. Although ITPCA and MPCA have comparable performance in the first two learning rounds, ITPCA begins to surpass MPCA after the third one. Figure 6 gives the best recognition percentages of the different methods; it supports the same conclusion as Figures 3, 4, and 5.
The time and space complexity of the different methods are shown in Figures 7 and 8, respectively. Taking the time complexity into account, it can be found that at the stage of initial learning PCA has the lowest time complexity. With
Figure 4: The recognition results of the different methods for the second incremental learning (x-axis: the number of low-dimensional features; y-axis: the recognition results; curves: PCA, IPCA, MPCA, ITPCA).
Figure 5: The recognition results of the different methods for the third incremental learning (x-axis: the number of low-dimensional features; y-axis: the recognition results; curves: PCA, IPCA, MPCA, ITPCA).
the addition of new samples, the time complexity of PCA and MPCA grows greatly, while the time complexity of IPCA and ITPCA remains stable; ITPCA grows more slowly than MPCA. The reason is that ITPCA performs incremental learning based on the updated-SVD technique and avoids decomposing the mode-$n$ covariance matrix of the original samples again. Considering the space complexity, it is easy to see that ITPCA has the lowest space complexity among the four compared methods.
Figure 6: The comparison of recognition performance of the different methods (x-axis: the number of class labels, at Class 6, Class 8, and Class 10; y-axis: the recognition results; series: PCA, IPCA, MPCA, ITPCA).
Figure 7: The comparison of time complexity of the different methods (x-axis: the number of class labels; y-axis: the time complexity (s); curves: PCA, IPCA, MPCA, ITPCA).
5. Conclusion
This paper presents incremental tensor principal component analysis based on the updated-SVD technique to take full advantage of the redundancy of the spatial structure information and of online learning. Furthermore, this paper proves that PCA and 2DPCA are special cases of MPCA and that all of them can be unified into the graph embedding framework. This
Figure 8: The comparison of space complexity of the different methods (x-axis: the number of class labels; y-axis: the space complexity (M); curves: PCA, IPCA, MPCA, ITPCA).
paper also analyzes incremental learning based on a single sample and on multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that principal component analysis based on tensor representation is superior to principal component analysis based on vector representation. Although at the stage of initial learning MPCA has better recognition performance than ITPCA, the learning capability of ITPCA gradually improves and eventually exceeds that of MPCA. Moreover, even as new samples are added, the time and space complexity of ITPCA still grow slowly.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61272448), the Doctoral Fund of the Ministry of Education of China (20110181130007), and the Young Scientist Project of Chengdu University (no. 2013XJZ21).
References

[1] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear discriminant analysis with regularization and aggregation for tensor object recognition," IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 103–123, 2009.
[2] C. Liu, K. He, J.-L. Zhou, and C.-B. Gao, "Discriminant orthogonal rank-one tensor projections for face recognition," in Intelligent Information and Database Systems, N. T. Nguyen, C.-G. Kim, and A. Janiak, Eds., vol. 6592 of Lecture Notes in Computer Science, pp. 203–211, 2011.
[3] G.-F. Lu, Z. Lin, and Z. Jin, "Face recognition using discriminant locality preserving projections based on maximum margin criterion," Pattern Recognition, vol. 43, no. 10, pp. 3572–3579, 2010.
[4] D. Tao, X. Li, X. Wu, and S. J. Maybank, "General tensor discriminant analysis and Gabor features for gait recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1700–1715, 2007.
[5] F. Nie, S. Xiang, Y. Song, and C. Zhang, "Extracting the optimal dimensionality for local tensor discriminant analysis," Pattern Recognition, vol. 42, no. 1, pp. 105–114, 2009.
[6] Z.-Z. Yu, C.-C. Jia, W. Pang, C.-Y. Zhang, and L.-H. Zhong, "Tensor discriminant analysis with multiscale features for action modeling and categorization," IEEE Signal Processing Letters, vol. 19, no. 2, pp. 95–98, 2012.
[7] S. J. Wang, J. Yang, M. F. Sun, X. J. Peng, M. M. Sun, and C. G. Zhou, "Sparse tensor discriminant color space for face verification," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 6, pp. 876–888, 2012.
[8] J. L. Minoi, C. E. Thomaz, and D. F. Gillies, "Tensor-based multivariate statistical discriminant methods for face applications," in Proceedings of the International Conference on Statistics in Science, Business and Engineering (ICSSBE '12), pp. 1–6, September 2012.
[9] N. Tang, X. Gao, and X. Li, "Tensor subclass discriminant analysis for radar target classification," Electronics Letters, vol. 48, no. 8, pp. 455–456, 2012.
[10] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "A survey of multilinear subspace learning for tensor data," Pattern Recognition, vol. 44, no. 7, pp. 1540–1551, 2011.
[11] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: multilinear principal component analysis of tensor objects," IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18–39, 2008.
[12] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, "Graph embedding and extensions: a general framework for dimensionality reduction," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 1, pp. 40–51, 2007.
[13] R. Plamondon and S. N. Srihari, "On-line and off-line handwriting recognition: a comprehensive survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 63–84, 2000.
[14] C. M. Johnson, "A survey of current research on online communities of practice," Internet and Higher Education, vol. 4, no. 1, pp. 45–60, 2001.
[15] P. Hall, D. Marshall, and R. Martin, "Merging and splitting eigenspace models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 9, pp. 1042–1049, 2000.
[16] J. Sun, D. Tao, S. Papadimitriou, P. S. Yu, and C. Faloutsos, "Incremental tensor analysis: theory and applications," ACM Transactions on Knowledge Discovery from Data, vol. 2, no. 3, article 11, 2008.
[17] J. Wen, X. Gao, Y. Yuan, D. Tao, and J. Li, "Incremental tensor biased discriminant analysis: a new color-based visual tracking method," Neurocomputing, vol. 73, no. 4–6, pp. 827–839, 2010.
[18] J.-G. Wang, E. Sung, and W.-Y. Yau, "Incremental two-dimensional linear discriminant analysis with applications to face recognition," Journal of Network and Computer Applications, vol. 33, no. 3, pp. 314–322, 2010.
[19] X. Qiao, R. Xu, Y.-W. Chen, T. Igarashi, K. Nakao, and A. Kashimoto, "Generalized N-Dimensional Principal Component Analysis (GND-PCA) based statistical appearance modeling of facial images with multiple modes," IPSJ Transactions on Computer Vision and Applications, vol. 1, pp. 231–241, 2009.
[20] H. Kong, X. Li, L. Wang, E. K. Teoh, J.-G. Wang, and R. Venkateswarlu, "Generalized 2D principal component analysis," in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '05), vol. 1, pp. 108–113, August 2005.
[21] D. Zhang and Z.-H. Zhou, "(2D)2 PCA: two-directional two-dimensional PCA for efficient face representation and recognition," Neurocomputing, vol. 69, no. 1–3, pp. 224–231, 2005.
[22] J. Ye, "Generalized low rank approximations of matrices," Machine Learning, vol. 61, no. 1–3, pp. 167–191, 2005.
[23] J. Yang, D. Zhang, A. F. Frangi, and J.-Y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
[24] J. Yang and J.-Y. Yang, "From image vector to matrix: a straightforward image projection technique, IMPCA vs. PCA," Pattern Recognition, vol. 35, no. 9, pp. 1997–1999, 2002.
[25] J. Kwok and H. Zhao, "Incremental eigen decomposition," in Proceedings of the International Conference on Artificial Neural Networks (ICANN '03), pp. 270–273, Istanbul, Turkey, June 2003.
[26] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
10 Mathematical Problems in Engineering
[19] X Qiao R Xu Y-W Chen T Igarashi K Nakao and AKashimoto ldquoGeneralized N-Dimensional Principal Compo-nent Analysis (GND-PCA) based statistical appearance mod-eling of facial images with multiple modesrdquo IPSJ Transactionson Computer Vision and Applications vol 1 pp 231ndash241 2009
[20] H Kong X Li L Wang E K Teoh J-G Wang and RVenkateswarlu ldquoGeneralized 2Dprincipal component analysisrdquoin Proceedings of the IEEE International Joint Conference onNeural Networks (IJCNN rsquo05) vol 1 pp 108ndash113 August 2005
[21] D Zhang and Z-H Zhou ldquo(2D)2 PCA two-directional two-dimensional PCA for efficient face representation and recogni-tionrdquo Neurocomputing vol 69 no 1ndash3 pp 224ndash231 2005
[22] J Ye ldquoGeneralized low rank approximations of matricesrdquoMachine Learning vol 61 no 1ndash3 pp 167ndash191 2005
[23] J Yang D Zhang A F Frangi and J-Y Yang ldquoTwo-dimen-sional PCA a new approach to appearance-based face represen-tation and recognitionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 26 no 1 pp 131ndash137 2004
[24] J Yang and J-Y Yang ldquoFrom image vector to matrix a straight-forward image projection technique-IMPCA vs PCArdquo PatternRecognition vol 35 no 9 pp 1997ndash1999 2002
[25] J Kwok and H Zhao ldquoIncremental eigen decompositionrdquo inProceedings of the International Conference on Artificial NeuralNetworks (ICANN rsquo03) pp 270ndash273 Istanbul Turkey June2003
[26] PN Belhumeur J PHespanha andD J Kriegman ldquoEigenfacesvs fisherfaces recognition using class specific linear projec-tionrdquo IEEE Transactions on Pattern Analysis and Machine Intel-ligence vol 19 no 7 pp 711ndash720 1997
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
6 Mathematical Problems in Engineering
It is worth noting that when new samples become available, there is no need to recompute the mode-$n$ covariance matrix of all training samples; we only have to compute the mode-$n$ covariance matrix of the newly added samples and the difference between the original training samples and the newly added samples. However, as in traditional incremental PCA, the eigendecomposition of $C^{(n)}$ must be repeated each time new samples are added. This repeated eigendecomposition of $C^{(n)}$ causes heavy computational cost, which is called "the eigendecomposition updating problem." For traditional vector-based incremental learning algorithms, the updated-SVD technique was proposed in [25] to address the eigendecomposition. This paper introduces the updated-SVD technique into the tensor-based incremental learning algorithm.
For the original samples, the mode-$n$ covariance matrix is
\[
C_{\mathrm{old}}^{(n)} = \sum_{i=1}^{K}\left(X_{i}^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}\right)\left(X_{i}^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}\right)^{T} = S_{\mathrm{old}}^{(n)} S_{\mathrm{old}}^{(n)T},
\tag{36}
\]
where $S_{\mathrm{old}}^{(n)} = [X_{1}^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}, \ldots, X_{K}^{(n)} - \bar{X}_{\mathrm{old}}^{(n)}]$. From the SVD $S_{\mathrm{old}}^{(n)} = U \Sigma V^{T}$, we can get the following:
\[
S_{\mathrm{old}}^{(n)} S_{\mathrm{old}}^{(n)T} = \left(U \Sigma V^{T}\right)\left(U \Sigma V^{T}\right)^{T} = U \Sigma V^{T} V \Sigma U^{T} = U \Sigma^{2} U^{T} = \operatorname{eig}\left(C_{\mathrm{old}}^{(n)}\right).
\tag{37}
\]
So it is easy to derive that the eigenvectors of $C_{\mathrm{old}}^{(n)}$ are the left singular vectors of $S_{\mathrm{old}}^{(n)}$, and its eigenvalues are the squares of the corresponding singular values of $S_{\mathrm{old}}^{(n)}$.

For the new samples, the mode-$n$ covariance matrix is
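The relation just stated can be checked numerically. The sketch below (NumPy, with an arbitrary random matrix standing in for a centered mode-$n$ data matrix) confirms that the eigenvalues of $S S^{T}$ are the squared singular values of $S$ and that its eigenvectors coincide with the left singular vectors up to sign.

```python
import numpy as np

rng = np.random.default_rng(0)
S_old = rng.standard_normal((8, 20))   # stand-in for the centered matrix S_old^(n)
C_old = S_old @ S_old.T                # mode-n covariance matrix C_old^(n)

# Eigendecomposition of C_old, sorted by decreasing eigenvalue.
eigvals, eigvecs = np.linalg.eigh(C_old)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# SVD of S_old: left singular vectors U, singular values sigma (descending).
U, sigma, Vt = np.linalg.svd(S_old, full_matrices=False)

# Eigenvalues of C_old are the squared singular values of S_old ...
assert np.allclose(eigvals, sigma**2)
# ... and each eigenvector matches a left singular vector up to sign.
assert np.allclose(np.abs(np.diag(eigvecs.T @ U)), 1.0)
```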
\[
C_{\mathrm{new}}^{(n)} = \sum_{i=K+1}^{K+T}\left(X_{i}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\left(X_{i}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)^{T} = S_{\mathrm{new}}^{(n)} S_{\mathrm{new}}^{(n)T},
\tag{38}
\]
where $S_{\mathrm{new}}^{(n)} = [X_{K+1}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}, \ldots, X_{K+T}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}]$. According to (35), the updated mode-$n$ covariance matrix is defined as follows:
\[
C^{(n)} = C_{\mathrm{old}}^{(n)} + C_{\mathrm{new}}^{(n)} + \frac{KT}{K+T}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)^{T} = S^{(n)} S^{(n)T},
\tag{39}
\]
where $S^{(n)} = \left[S_{\mathrm{old}}^{(n)},\ S_{\mathrm{new}}^{(n)},\ \sqrt{KT/(K+T)}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\right]$. Therefore, the updated projection matrix $U^{(n)}$ consists of the eigenvectors corresponding to the largest $P_{n}$ eigenvalues of $C^{(n)} = S^{(n)} S^{(n)T}$, that is, the leading left singular vectors of $S^{(n)}$. The main steps of incremental tensor principal component analysis are listed as follows.
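The merge identity (39) can be verified numerically: the scatter of the combined data around the combined mean equals the old scatter plus the new scatter plus a rank-one correction built from the mean difference. The sketch below uses random stand-in data; the sizes $K$, $T$, and the row dimension are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, T, d = 12, 5, 6                       # old samples, new samples, mode-n row dimension
X_old = rng.standard_normal((d, K))      # columns: mode-n unfoldings of old samples
X_new = rng.standard_normal((d, T))

m_old = X_old.mean(axis=1, keepdims=True)
m_new = X_new.mean(axis=1, keepdims=True)
C_old = (X_old - m_old) @ (X_old - m_old).T
C_new = (X_new - m_new) @ (X_new - m_new).T

# Eq. (39): merged scatter = old + new + rank-one mean-difference correction.
C_merged = C_old + C_new + (K * T / (K + T)) * (m_old - m_new) @ (m_old - m_new).T

# Direct computation around the combined mean gives the same matrix.
X_all = np.hstack([X_old, X_new])
m_all = X_all.mean(axis=1, keepdims=True)
C_direct = (X_all - m_all) @ (X_all - m_all).T
assert np.allclose(C_merged, C_direct)

# Equivalently C_merged = S S^T with S = [S_old, S_new, sqrt(KT/(K+T)) (m_old - m_new)].
S = np.hstack([X_old - m_old, X_new - m_new, np.sqrt(K * T / (K + T)) * (m_old - m_new)])
assert np.allclose(C_merged, S @ S.T)
```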
Input: the original samples and the newly added samples.
Output: $N$ projection matrices.

Step 1. Compute and save the truncated eigendecomposition
\[
\operatorname{eig}\left(C_{\mathrm{old}}^{(n)}\right) \approx \left[U_{r}^{(n)}, \Sigma_{r}^{(n)}\right].
\tag{40}
\]

Step 2. For $n = 1, \ldots, N$:
\[
B = \left[S_{\mathrm{new}}^{(n)},\ \sqrt{\frac{KT}{K+T}}\left(\bar{X}_{\mathrm{old}}^{(n)} - \bar{X}_{\mathrm{new}}^{(n)}\right)\right].
\tag{41}
\]
Process the QR decomposition for the following equation:
\[
QR = \left(I - U_{r}^{(n)} U_{r}^{(n)T}\right) B.
\tag{42}
\]
Process the SVD decomposition for the following small matrix:
\[
\operatorname{svd}\left(\begin{bmatrix} \sqrt{\Sigma_{r}^{(n)}} & U_{r}^{(n)T} B \\ 0 & R \end{bmatrix}\right) = \tilde{U} \tilde{\Sigma} \tilde{V}^{T}.
\tag{43}
\]
Compute the following equation:
\[
\left[S_{\mathrm{old}}^{(n)},\ B\right] \approx \left(\left[U_{r}^{(n)},\ Q\right] \tilde{U}\right) \tilde{\Sigma} \left(\begin{bmatrix} V_{r}^{(n)} & 0 \\ 0 & I \end{bmatrix} \tilde{V}\right)^{T}.
\tag{44}
\]
Then the updated projection matrix is computed as follows:
\[
U^{(n)} = \left[U_{r}^{(n)},\ Q\right] \tilde{U}.
\tag{45}
\]
end

Step 3. Repeat the above steps until the incremental learning is finished.
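The per-mode update in Step 2 can be sketched in NumPy. The helper below is an illustration of the general updated-SVD recipe (QR of the out-of-subspace component, then an SVD of a small core matrix), not the authors' code; for simplicity it works directly with the singular values of $S_{\mathrm{old}}$ rather than the eigenvalues of $C_{\mathrm{old}}$, and it checks that the updated factorization matches a direct SVD of $[S_{\mathrm{old}}, B]$.

```python
import numpy as np

def svd_update(U_r, s_r, B):
    """Update a left singular basis U_r (singular values s_r) after
    appending the columns B, following the QR + small-SVD recipe."""
    proj = U_r @ (U_r.T @ B)
    Q, R = np.linalg.qr(B - proj)                   # component of B outside span(U_r)
    r, c = len(s_r), B.shape[1]
    core = np.block([[np.diag(s_r), U_r.T @ B],     # small (r+c) x (r+c) core matrix
                     [np.zeros((c, r)), R]])
    U_t, s_t, _ = np.linalg.svd(core)
    return np.hstack([U_r, Q]) @ U_t, s_t           # updated basis and singular values

rng = np.random.default_rng(2)
S_old = rng.standard_normal((10, 6))    # rank-6 "old" matrix in a 10-dim mode space
B = rng.standard_normal((10, 4))        # newly appended columns

U, s, _ = np.linalg.svd(S_old, full_matrices=False)
U_upd, s_upd = svd_update(U, s, B)

# Compare against a direct SVD of the concatenated matrix [S_old, B].
U_dir, s_dir, _ = np.linalg.svd(np.hstack([S_old, B]), full_matrices=False)
assert np.allclose(s_upd, s_dir)
# Leading left singular vectors agree up to sign.
assert np.allclose(np.abs(U_upd[:, :3].T @ U_dir[:, :3]), np.eye(3), atol=1e-6)
```

The key point is that the SVD in the loop is taken of a small $(r + c) \times (r + c)$ matrix rather than of the full data, which is what makes the update cheap.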
3.3. The Complexity Analysis. For a tensor dataset $X = \{X_{1}, \ldots, X_{M}\}$, $X_{i} \in \mathbb{R}^{I_{1} \times \cdots \times I_{N}}$, without loss of generality it is assumed that all dimensions are equal, that is, $I_{1} = \cdots = I_{N} = I$.

Vector-based PCA converts all data into vectors and constructs a data matrix $X \in \mathbb{R}^{M \times D}$ with $D = I^{N}$. For vector-based PCA the main computational cost contains three parts: the computation of the covariance matrix, the eigendecomposition of the covariance matrix, and the computation of the low-dimensional features. The time complexity of computing the covariance matrix is $O(M I^{2N})$, the time complexity of the eigendecomposition is $O(I^{3N})$, and the time complexity of computing the low-dimensional features is $O(M I^{2N} + I^{3N})$.

Letting the number of iterations be 1, the time complexity of computing the mode-$n$ covariance matrices for MPCA is $O(M N I^{N+1})$, the time complexity of the eigendecompositions is $O(N I^{3})$, and the time complexity of computing the low-dimensional features is $O(M N I^{N+1})$, so the total time complexity is $O(M N I^{N+1} + N I^{3})$. Considering the time complexity, MPCA is superior to PCA.

For ITPCA, it is assumed that $T$ incremental datasets are added. MPCA has to recompute the mode-$n$ covariance matrices and conduct eigendecompositions for the initial dataset and every incremental dataset; the more training samples there are, the higher the time complexity. If the updated-SVD is used, we only need to compute a QR decomposition and an SVD. The time complexity of the QR decomposition is $O(N I^{N+1})$, and the time complexity of the rank-$k$ decomposition of the matrix of size $(r + I) \times (r + I^{N-1})$ is $O(N (r + I) k^{2})$. It can be seen that the time complexity of the updated-SVD has nothing to do with the number of newly added samples.

Figure 1: The samples in the USPS dataset.
Taking the space complexity into account, if the training samples are reduced to a low-dimensional space of dimension $D = \prod_{n=1}^{N} d_{n}$, then PCA needs $D \prod_{n=1}^{N} I_{n}$ bytes to save the projection matrix, whereas MPCA needs $\sum_{n=1}^{N} I_{n} d_{n}$ bytes, so MPCA has lower space complexity than PCA. For incremental learning, both PCA and MPCA need $M \prod_{n=1}^{N} I_{n}$ bytes to save the initial training samples; ITPCA only needs $\sum_{n=1}^{N} I_{n}^{2}$ bytes to keep the mode-$n$ covariance matrices.
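To make the storage comparison concrete, the sketch below counts the stored entries for a USPS-style setting with $N = 2$ modes and $16 \times 16$ images; the reduced dimensions $d_{n} = 6$ and the sample count are illustrative assumptions, not values taken from the paper.

```python
import math

# Illustrative setting: N = 2 modes, I_n = 16, assumed reduced sizes d_n = 6.
I = [16, 16]
d = [6, 6]

D = math.prod(d)                                   # low-dimensional size for PCA
pca_projection = D * math.prod(I)                  # PCA: D * (I1*I2) projection entries
mpca_projection = sum(i * k for i, k in zip(I, d)) # MPCA: N small mode projections
itpca_state = sum(i * i for i in I)                # ITPCA: mode-n covariance matrices

print(pca_projection)   # 9216
print(mpca_projection)  # 192
print(itpca_state)      # 512
```

Even in this tiny two-mode case the multilinear projections are orders of magnitude smaller than the flattened PCA projection, and the ITPCA state needed for updating stays independent of the number of stored samples.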
4 Experiments
In this section, handwritten digit recognition experiments on the USPS image dataset are conducted to evaluate the performance of incremental tensor principal component analysis. The USPS handwritten digit dataset has 9298 images of the digits zero to nine, shown in Figure 1; each image has size $16 \times 16$. In this paper we choose 1000 images and divide them into initial training samples, newly added samples, and test samples. Furthermore, the nearest neighbor classifier is employed to classify the low-dimensional features. The recognition results are compared with PCA [26], IPCA [15], and MPCA [11].

At first we choose 70 samples belonging to four classes as the initial training samples. At each stage of incremental learning, 70 samples belonging to two further classes are added, so after three stages the training samples cover all ten class labels, with 70 samples in each class. The remaining samples are used as the testing dataset. All algorithms are implemented in MATLAB 2010 on an Intel(R) Core(TM) i5-3210M CPU at 2.5 GHz with 4 GB of RAM.
Firstly, 36 PCs are preserved and fed into the nearest neighbor classifier to obtain the recognition results, which are plotted in Figure 2. It can be seen that MPCA and ITPCA are better than PCA and IPCA for the initial learning; the probable reason is that MPCA and ITPCA employ the tensor representation, which preserves the structure information.
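The evaluation pipeline used throughout this section (project onto a fixed number of principal components, then classify with a nearest neighbor rule) can be sketched as follows. The toy data below are random stand-ins for the vectorized $16 \times 16$ digit images, not the USPS samples themselves.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for vectorized 16x16 images: two classes with shifted means.
X_train = np.vstack([rng.normal(0, 1, (50, 256)), rng.normal(2, 1, (50, 256))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([rng.normal(0, 1, (20, 256)), rng.normal(2, 1, (20, 256))])
y_test = np.array([0] * 20 + [1] * 20)

# PCA: center the data and keep 36 principal components, as in the experiments.
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
W = Vt[:36].T                                   # 256 x 36 projection matrix
Z_train = (X_train - mean) @ W
Z_test = (X_test - mean) @ W

# 1-NN classification in the reduced space.
dists = ((Z_test[:, None, :] - Z_train[None, :, :]) ** 2).sum(-1)
pred = y_train[dists.argmin(axis=1)]
accuracy = (pred == y_test).mean()
assert accuracy > 0.9   # the two toy classes are well separated
```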
Figure 2: The recognition results for 36 PCs of the initial learning.

Figure 3: The recognition results of different methods of the first incremental learning.

The recognition results at the different learning stages are shown in Figures 3, 4, and 5. The recognition results of all four methods fluctuate violently when the number of low-dimensional features is small; however, as the number of features increases, the recognition performance becomes stable. In general, MPCA and ITPCA are superior to PCA and IPCA. Although ITPCA only achieves comparable performance in the first two learning stages, it begins to surpass MPCA after the third. Figure 6 gives the best recognition rates of the different methods, from which we can draw the same conclusion as from Figures 3, 4, and 5.
Figure 4: The recognition results of different methods of the second incremental learning.

Figure 5: The recognition results of different methods of the third incremental learning.

The time and space complexities of the different methods are shown in Figures 7 and 8, respectively. Considering the time complexity, PCA has the lowest cost at the initial learning stage. As new samples are added, the time complexity of PCA and MPCA grows greatly, while that of IPCA and ITPCA stays stable, with ITPCA growing more slowly than MPCA. The reason is that ITPCA performs incremental learning with the updated-SVD technique and so avoids decomposing the mode-$n$ covariance matrices of the original samples again. Considering the space complexity, it is easy to find that ITPCA has the lowest space complexity among the four compared methods.
Figure 6: The comparison of recognition performance of different methods (best results at 6, 8, and 10 class labels).

Figure 7: The comparison of time complexity of different methods.
Figure 8: The comparison of space complexity of different methods.

5. Conclusion

This paper presents incremental tensor principal component analysis based on the updated-SVD technique, which takes full advantage of the redundancy of the spatial structure information and supports online learning. Furthermore, this paper proves that PCA and 2DPCA are special cases of MPCA and that all of them can be unified into the graph embedding framework. This paper also analyzes incremental learning based on a single sample and on multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that principal component analysis based on the tensor representation is superior to principal component analysis based on the vector representation. Although MPCA has better recognition performance than ITPCA at the initial learning stage, the learning capability of ITPCA improves gradually and comes to exceed that of MPCA. Moreover, even as new samples are added, the time and space complexity of ITPCA grow only slowly.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was funded with support from the National Natural Science Foundation of China (61272448), the Doctoral Fund of the Ministry of Education of China (20110181130007), and the Young Scientist Project of Chengdu University (no. 2013XJZ21).
References

[1] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear discriminant analysis with regularization and aggregation for tensor object recognition," IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 103-123, 2009.
[2] C. Liu, K. He, J.-L. Zhou, and C.-B. Gao, "Discriminant orthogonal rank-one tensor projections for face recognition," in Intelligent Information and Database Systems, N. T. Nguyen, C.-G. Kim, and A. Janiak, Eds., vol. 6592 of Lecture Notes in Computer Science, pp. 203-211, 2011.
[3] G.-F. Lu, Z. Lin, and Z. Jin, "Face recognition using discriminant locality preserving projections based on maximum margin criterion," Pattern Recognition, vol. 43, no. 10, pp. 3572-3579, 2010.
[4] D. Tao, X. Li, X. Wu, and S. J. Maybank, "General tensor discriminant analysis and Gabor features for gait recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1700-1715, 2007.
[5] F. Nie, S. Xiang, Y. Song, and C. Zhang, "Extracting the optimal dimensionality for local tensor discriminant analysis," Pattern Recognition, vol. 42, no. 1, pp. 105-114, 2009.
[6] Z.-Z. Yu, C.-C. Jia, W. Pang, C.-Y. Zhang, and L.-H. Zhong, "Tensor discriminant analysis with multiscale features for action modeling and categorization," IEEE Signal Processing Letters, vol. 19, no. 2, pp. 95-98, 2012.
[7] S. J. Wang, J. Yang, M. F. Sun, X. J. Peng, M. M. Sun, and C. G. Zhou, "Sparse tensor discriminant color space for face verification," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 6, pp. 876-888, 2012.
[8] J. L. Minoi, C. E. Thomaz, and D. F. Gillies, "Tensor-based multivariate statistical discriminant methods for face applications," in Proceedings of the International Conference on Statistics in Science, Business and Engineering (ICSSBE '12), pp. 1-6, September 2012.
[9] N. Tang, X. Gao, and X. Li, "Tensor subclass discriminant analysis for radar target classification," Electronics Letters, vol. 48, no. 8, pp. 455-456, 2012.
[10] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "A survey of multilinear subspace learning for tensor data," Pattern Recognition, vol. 44, no. 7, pp. 1540-1551, 2011.
[11] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: multilinear principal component analysis of tensor objects," IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18-39, 2008.
[12] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, "Graph embedding and extensions: a general framework for dimensionality reduction," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 1, pp. 40-51, 2007.
[13] R. Plamondon and S. N. Srihari, "On-line and off-line handwriting recognition: a comprehensive survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 63-84, 2000.
[14] C. M. Johnson, "A survey of current research on online communities of practice," Internet and Higher Education, vol. 4, no. 1, pp. 45-60, 2001.
[15] P. Hall, D. Marshall, and R. Martin, "Merging and splitting eigenspace models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 9, pp. 1042-1049, 2000.
[16] J. Sun, D. Tao, S. Papadimitriou, P. S. Yu, and C. Faloutsos, "Incremental tensor analysis: theory and applications," ACM Transactions on Knowledge Discovery from Data, vol. 2, no. 3, article 11, 2008.
[17] J. Wen, X. Gao, Y. Yuan, D. Tao, and J. Li, "Incremental tensor biased discriminant analysis: a new color-based visual tracking method," Neurocomputing, vol. 73, no. 4-6, pp. 827-839, 2010.
[18] J.-G. Wang, E. Sung, and W.-Y. Yau, "Incremental two-dimensional linear discriminant analysis with applications to face recognition," Journal of Network and Computer Applications, vol. 33, no. 3, pp. 314-322, 2010.
[19] X. Qiao, R. Xu, Y.-W. Chen, T. Igarashi, K. Nakao, and A. Kashimoto, "Generalized N-dimensional principal component analysis (GND-PCA) based statistical appearance modeling of facial images with multiple modes," IPSJ Transactions on Computer Vision and Applications, vol. 1, pp. 231-241, 2009.
[20] H. Kong, X. Li, L. Wang, E. K. Teoh, J.-G. Wang, and R. Venkateswarlu, "Generalized 2D principal component analysis," in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '05), vol. 1, pp. 108-113, August 2005.
[21] D. Zhang and Z.-H. Zhou, "(2D)2 PCA: two-directional two-dimensional PCA for efficient face representation and recognition," Neurocomputing, vol. 69, no. 1-3, pp. 224-231, 2005.
[22] J. Ye, "Generalized low rank approximations of matrices," Machine Learning, vol. 61, no. 1-3, pp. 167-191, 2005.
[23] J. Yang, D. Zhang, A. F. Frangi, and J.-Y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131-137, 2004.
[24] J. Yang and J.-Y. Yang, "From image vector to matrix: a straightforward image projection technique - IMPCA vs. PCA," Pattern Recognition, vol. 35, no. 9, pp. 1997-1999, 2002.
[25] J. Kwok and H. Zhao, "Incremental eigen decomposition," in Proceedings of the International Conference on Artificial Neural Networks (ICANN '03), pp. 270-273, Istanbul, Turkey, June 2003.
[26] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.
[16] J Sun D Tao S Papadimitriou P S Yu and C FaloutsosldquoIncremental tensor analysis theory and applicationsrdquo ACMTransactions on Knowledge Discovery from Data vol 2 no 3article 11 2008
[17] J Wen X Gao Y Yuan D Tao and J Li ldquoIncremental tensorbiased discriminant analysis a new color-based visual trackingmethodrdquo Neurocomputing vol 73 no 4ndash6 pp 827ndash839 2010
[18] J-G Wang E Sung and W-Y Yau ldquoIncremental two-dimen-sional linear discriminant analysis with applications to facerecognitionrdquo Journal of Network and Computer Applicationsvol 33 no 3 pp 314ndash322 2010
10 Mathematical Problems in Engineering
[19] X Qiao R Xu Y-W Chen T Igarashi K Nakao and AKashimoto ldquoGeneralized N-Dimensional Principal Compo-nent Analysis (GND-PCA) based statistical appearance mod-eling of facial images with multiple modesrdquo IPSJ Transactionson Computer Vision and Applications vol 1 pp 231ndash241 2009
[20] H Kong X Li L Wang E K Teoh J-G Wang and RVenkateswarlu ldquoGeneralized 2Dprincipal component analysisrdquoin Proceedings of the IEEE International Joint Conference onNeural Networks (IJCNN rsquo05) vol 1 pp 108ndash113 August 2005
[21] D Zhang and Z-H Zhou ldquo(2D)2 PCA two-directional two-dimensional PCA for efficient face representation and recogni-tionrdquo Neurocomputing vol 69 no 1ndash3 pp 224ndash231 2005
[22] J Ye ldquoGeneralized low rank approximations of matricesrdquoMachine Learning vol 61 no 1ndash3 pp 167ndash191 2005
[23] J Yang D Zhang A F Frangi and J-Y Yang ldquoTwo-dimen-sional PCA a new approach to appearance-based face represen-tation and recognitionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 26 no 1 pp 131ndash137 2004
[24] J Yang and J-Y Yang ldquoFrom image vector to matrix a straight-forward image projection technique-IMPCA vs PCArdquo PatternRecognition vol 35 no 9 pp 1997ndash1999 2002
[25] J Kwok and H Zhao ldquoIncremental eigen decompositionrdquo inProceedings of the International Conference on Artificial NeuralNetworks (ICANN rsquo03) pp 270ndash273 Istanbul Turkey June2003
[26] PN Belhumeur J PHespanha andD J Kriegman ldquoEigenfacesvs fisherfaces recognition using class specific linear projec-tionrdquo IEEE Transactions on Pattern Analysis and Machine Intel-ligence vol 19 no 7 pp 711ndash720 1997
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
8 Mathematical Problems in Engineering
Figure 4: The recognition results of different methods for the second incremental learning (recognition rate versus the number of low-dimensional features, 50 to 250; curves for PCA, IPCA, MPCA, and ITPCA).
Figure 5: The recognition results of different methods for the third incremental learning (recognition rate versus the number of low-dimensional features, 50 to 250; curves for PCA, IPCA, MPCA, and ITPCA).
With the addition of new samples, the time complexity of PCA and MPCA grows greatly, whereas that of IPCA and ITPCA remains stable, and ITPCA grows more slowly than MPCA. The reason is that ITPCA performs incremental learning with the updated-SVD technique and thus avoids decomposing the mode-n covariance matrix of the original samples again. As for space complexity, ITPCA has the lowest space complexity among the four compared methods.
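The complexity advantage comes from the updated-SVD step: when new sample columns arrive, only a small augmented matrix is decomposed instead of the full covariance of all accumulated data. A minimal sketch of this idea in plain NumPy (the helper name `update_svd` is hypothetical; this illustrates the generic updated-SVD mechanism, not the paper's exact ITPCA procedure):

```python
import numpy as np

def update_svd(U, s, new_cols, rank):
    """Update the left factors U and singular values s of existing data
    when new sample columns arrive, without re-decomposing the old data."""
    # Project the new columns onto the current basis; keep the residual part.
    proj = U.T @ new_cols
    resid = new_cols - U @ proj
    Q, R = np.linalg.qr(resid)
    # Decompose a small augmented matrix instead of the full data matrix.
    k = len(s)
    K = np.zeros((k + Q.shape[1], k + new_cols.shape[1]))
    K[:k, :k] = np.diag(s)
    K[:k, k:] = proj
    K[k:, k:] = R
    Up, sp, _ = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Up
    return U_new[:, :rank], sp[:rank]
```

When no truncation is applied, the updated factors match a direct SVD of the combined data exactly; with truncation the update is approximate, which is the usual trade-off in incremental subspace learning.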
Figure 6: The comparison of recognition performance of different methods (recognition rate versus the number of class labels: Class 6, Class 8, Class 10; curves for PCA, IPCA, MPCA, and ITPCA).
Figure 7: The comparison of time complexity of different methods (time complexity in seconds versus the number of class labels, 4 to 10; curves for PCA, IPCA, MPCA, and ITPCA).
Figure 8: The comparison of space complexity of different methods (space complexity (M) versus the number of class labels, 4 to 10; curves for PCA, IPCA, MPCA, and ITPCA).

5. Conclusion

This paper presents incremental tensor principal component analysis (ITPCA) based on the updated-SVD technique, which exploits the redundancy of spatial structure information and supports online learning. Furthermore, this paper proves that PCA and 2DPCA are special cases of MPCA and that all of them can be unified into the graph embedding framework. It also analyzes incremental learning based on a single sample and on multiple samples in detail. The experiments on handwritten digit recognition demonstrate that principal component analysis based on tensor representation is superior to principal component analysis based on vector representation. Although MPCA achieves better recognition performance than ITPCA at the initial learning stage, the learning capability of ITPCA improves gradually and eventually exceeds that of MPCA. Moreover, even when new samples are added, the time and space complexity of ITPCA grow only slowly.
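The claim that PCA and 2DPCA are special cases of MPCA can be made concrete with a small multilinear-projection sketch (NumPy only; the helper names `unfold`, `mode_mult`, `mpca_fit`, and `mpca_project` are hypothetical, and this is a didactic one-pass sketch rather than the authors' implementation): each mode of a centered sample tensor is projected onto the leading eigenvectors of that mode's covariance matrix.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: rows indexed by `mode`, columns by the other modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product: multiply mode `mode` of tensor T by the matrix M."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def mpca_fit(samples, ranks):
    """One pass of an MPCA-style fit: per mode, keep the leading eigenvectors
    of the mode-n covariance of the centered sample tensors."""
    mean = samples.mean(axis=0)
    Us = []
    for mode, k in enumerate(ranks):
        C = np.zeros((samples.shape[mode + 1],) * 2)
        for X in samples:
            Xm = unfold(X - mean, mode)
            C += Xm @ Xm.T
        _, V = np.linalg.eigh(C)        # eigenvalues in ascending order
        Us.append(V[:, ::-1][:, :k])    # keep the top-k eigenvectors
    return mean, Us

def mpca_project(X, mean, Us):
    """Project one sample tensor onto the low-dimensional tensor subspace."""
    Y = X - mean
    for mode, U in enumerate(Us):
        Y = mode_mult(Y, U.T, mode)
    return Y
```

With order-1 (vector) samples there is a single mode, the mode-0 covariance is the ordinary scatter matrix, and the procedure collapses to classical PCA; with order-2 (matrix) samples and only one projected mode it reduces to 2DPCA.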
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work has been funded with support from the National Natural Science Foundation of China (61272448), the Doctoral Fund of the Ministry of Education of China (20110181130007), and the Young Scientist Project of Chengdu University (no. 2013XJZ21).
References

[1] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear discriminant analysis with regularization and aggregation for tensor object recognition," IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 103–123, 2009.
[2] C. Liu, K. He, J.-L. Zhou, and C.-B. Gao, "Discriminant orthogonal rank-one tensor projections for face recognition," in Intelligent Information and Database Systems, N. T. Nguyen, C.-G. Kim, and A. Janiak, Eds., vol. 6592 of Lecture Notes in Computer Science, pp. 203–211, 2011.
[3] G.-F. Lu, Z. Lin, and Z. Jin, "Face recognition using discriminant locality preserving projections based on maximum margin criterion," Pattern Recognition, vol. 43, no. 10, pp. 3572–3579, 2010.
[4] D. Tao, X. Li, X. Wu, and S. J. Maybank, "General tensor discriminant analysis and Gabor features for gait recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1700–1715, 2007.
[5] F. Nie, S. Xiang, Y. Song, and C. Zhang, "Extracting the optimal dimensionality for local tensor discriminant analysis," Pattern Recognition, vol. 42, no. 1, pp. 105–114, 2009.
[6] Z.-Z. Yu, C.-C. Jia, W. Pang, C.-Y. Zhang, and L.-H. Zhong, "Tensor discriminant analysis with multiscale features for action modeling and categorization," IEEE Signal Processing Letters, vol. 19, no. 2, pp. 95–98, 2012.
[7] S. J. Wang, J. Yang, M. F. Sun, X. J. Peng, M. M. Sun, and C. G. Zhou, "Sparse tensor discriminant color space for face verification," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 6, pp. 876–888, 2012.
[8] J. L. Minoi, C. E. Thomaz, and D. F. Gillies, "Tensor-based multivariate statistical discriminant methods for face applications," in Proceedings of the International Conference on Statistics in Science, Business and Engineering (ICSSBE '12), pp. 1–6, September 2012.
[9] N. Tang, X. Gao, and X. Li, "Tensor subclass discriminant analysis for radar target classification," Electronics Letters, vol. 48, no. 8, pp. 455–456, 2012.
[10] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "A survey of multilinear subspace learning for tensor data," Pattern Recognition, vol. 44, no. 7, pp. 1540–1551, 2011.
[11] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: multilinear principal component analysis of tensor objects," IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18–39, 2008.
[12] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, "Graph embedding and extensions: a general framework for dimensionality reduction," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 1, pp. 40–51, 2007.
[13] R. Plamondon and S. N. Srihari, "On-line and off-line handwriting recognition: a comprehensive survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 63–84, 2000.
[14] C. M. Johnson, "A survey of current research on online communities of practice," Internet and Higher Education, vol. 4, no. 1, pp. 45–60, 2001.
[15] P. Hall, D. Marshall, and R. Martin, "Merging and splitting eigenspace models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 9, pp. 1042–1049, 2000.
[16] J. Sun, D. Tao, S. Papadimitriou, P. S. Yu, and C. Faloutsos, "Incremental tensor analysis: theory and applications," ACM Transactions on Knowledge Discovery from Data, vol. 2, no. 3, article 11, 2008.
[17] J. Wen, X. Gao, Y. Yuan, D. Tao, and J. Li, "Incremental tensor biased discriminant analysis: a new color-based visual tracking method," Neurocomputing, vol. 73, no. 4–6, pp. 827–839, 2010.
[18] J.-G. Wang, E. Sung, and W.-Y. Yau, "Incremental two-dimensional linear discriminant analysis with applications to face recognition," Journal of Network and Computer Applications, vol. 33, no. 3, pp. 314–322, 2010.
[19] X. Qiao, R. Xu, Y.-W. Chen, T. Igarashi, K. Nakao, and A. Kashimoto, "Generalized N-dimensional principal component analysis (GND-PCA) based statistical appearance modeling of facial images with multiple modes," IPSJ Transactions on Computer Vision and Applications, vol. 1, pp. 231–241, 2009.
[20] H. Kong, X. Li, L. Wang, E. K. Teoh, J.-G. Wang, and R. Venkateswarlu, "Generalized 2D principal component analysis," in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '05), vol. 1, pp. 108–113, August 2005.
[21] D. Zhang and Z.-H. Zhou, "(2D)2 PCA: two-directional two-dimensional PCA for efficient face representation and recognition," Neurocomputing, vol. 69, no. 1–3, pp. 224–231, 2005.
[22] J. Ye, "Generalized low rank approximations of matrices," Machine Learning, vol. 61, no. 1–3, pp. 167–191, 2005.
[23] J. Yang, D. Zhang, A. F. Frangi, and J.-Y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
[24] J. Yang and J.-Y. Yang, "From image vector to matrix: a straightforward image projection technique, IMPCA vs. PCA," Pattern Recognition, vol. 35, no. 9, pp. 1997–1999, 2002.
[25] J. Kwok and H. Zhao, "Incremental eigen decomposition," in Proceedings of the International Conference on Artificial Neural Networks (ICANN '03), pp. 270–273, Istanbul, Turkey, June 2003.
[26] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.