Extending metric multidimensional scaling with Bregman divergences
Jigang Sun and Colin Fyfe
Visualising 18-dimensional data
Outline
• Bregman divergence.
• Multidimensional scaling (MDS).
• Extending MDS with Bregman divergences.
• Relating the Sammon mapping to mappings with Bregman divergences. Comparison of effects and explanation.
• Conclusion
Strictly Convex function
A function F is strictly convex if, for any p and q in its domain and any 0 < η < 1,
F(ηp + (1 − η)q) < ηF(p) + (1 − η)F(q)
Pictorially, the strictly convex function F(x) lies below the segment connecting the two points (p, F(p)) and (q, F(q)).
Bregman Divergences
d_φ(x, y) = φ(x) − φ(y) − (x − y)·∇φ(y)
is the Bregman divergence between x and y based on the convex function φ.
The Taylor series expansion of φ(x) about y is
φ(x) = φ(y) + (x − y)φ′(y) + (x − y)² φ″(y)/2! + ...
so d_φ(x, y) is the sum of all terms of second order and higher.
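As an illustration (our own sketch, not from the slides; the function names are ours), the definition can be evaluated numerically for any convex φ with known gradient:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence d_phi(x, y) = phi(x) - phi(y) - <x - y, grad phi(y)>."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return phi(x) - phi(y) - np.dot(x - y, grad_phi(y))

# With phi(x) = ||x||^2 this recovers the squared Euclidean distance.
sq = lambda v: float(np.dot(v, v))
grad_sq = lambda v: 2.0 * v
```

For example, `bregman(sq, grad_sq, (1, 2), (0, 0))` gives 5, the squared Euclidean distance between the two points.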
Bregman Divergences
Euclidean distance is a Bregman divergence: take φ(x) = x². Then
d_φ(x, y) = x² − y² − (x − y)·2y
d_φ(x, y) = x² − 2xy + y²
d_φ(x, y) = (x − y)²
Kullback-Leibler Divergence
Take φ(p) = Σ_i p_i log p_i. Then ∂φ/∂q_i = log q_i + 1 and
d_φ(p, q) = Σ_i p_i log p_i − Σ_i q_i log q_i − Σ_i (p_i − q_i)(log q_i + 1)
d_φ(p, q) = Σ_i p_i log(p_i/q_i) − Σ_i (p_i − q_i)
For probability distributions, Σ_i p_i = Σ_i q_i = 1, so the last sum vanishes and
d_φ(p, q) = Σ_i p_i log(p_i/q_i)
which is the Kullback-Leibler divergence.
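A quick numerical check of this derivation (our own sketch, not part of the slides):

```python
import numpy as np

def kl(p, q):
    """KL divergence: the Bregman divergence of phi(p) = sum_i p_i log p_i
    between two probability distributions p and q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))
```

As expected, `kl(p, p)` is zero, and `kl(p, q)` generally differs from `kl(q, p)`.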
Generalised Information Divergence
• φ(z) = z log(z)
d_φ(x, y) = x log x − y log y − (x − y)(log y + 1)
d_φ(x, y) = x log x − x log y − (x − y)
d_φ(x, y) = x log(x/y) − (x − y)
Other Divergences
• Itakura-Saito divergence: φ(x) = −log(x)
• Mahalanobis distance: φ(x) = xᵀ A x for a positive definite matrix A
• Logistic loss: φ(x) = x log x + (1 − x) log(1 − x)
• Any strictly convex function generates a divergence
Some Properties
• d_φ(x, y) ≥ 0, with equality iff x = y.
• Not a metric, since in general d_φ(x, y) ≠ d_φ(y, x).
• (Though d(x, y) = d_φ(x, y) + d_φ(y, x) is symmetric.)
• Convex in the first parameter.
• Linear: d_{φ+aγ}(x, y) = d_φ(x, y) + a·d_γ(x, y).
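These properties can be checked numerically for, say, the Itakura-Saito divergence (our own sketch, not from the slides):

```python
import math

def d_is(x, y):
    """Itakura-Saito divergence: the Bregman divergence of phi(x) = -log x."""
    return x / y - math.log(x / y) - 1.0

# d_is(x, y) >= 0 with equality iff x == y, but in general
# d_is(x, y) != d_is(y, x), so it is not a metric.
```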
Multidimensional Scaling
• Creates one latent point for each data point.
• The latent space is often 2 dimensional.
• Positions the latent points so that they best represent the data distances.
– Two latent points are close if the two corresponding data points are close.
– Two latent points are distant if the two corresponding data points are distant.
Classical/Basic Metric MDS
• We minimise the stress function
E_BasicMDS = Σ_{i=1}^{N} Σ_{j=i+1}^{N} (L_ij − D_ij)² = Σ_{i=1}^{N} Σ_{j=i+1}^{N} E_ij²
where
L_ij = ||x_i − x_j||, the mapped distance between points i and j in latent space,
D_ij = ||y_i − y_j||, the distance between points i and j in data space,
E_ij = abs(L_ij − D_ij), the error on pair (i, j).
Each data point y_j in data space is represented by a latent point x_j in latent space.
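This stress can be sketched in a few lines (our naming, not the authors': `X_latent` holds the latent points as rows, `Y_data` the data points):

```python
import numpy as np

def pairwise_dist(P):
    """Matrix of Euclidean distances between the rows of P."""
    diff = P[:, None, :] - P[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def basic_mds_stress(X_latent, Y_data):
    """E_BasicMDS: sum over i < j of (L_ij - D_ij)^2."""
    L, D = pairwise_dist(X_latent), pairwise_dist(Y_data)
    iu = np.triu_indices(len(L), k=1)
    return float(((L[iu] - D[iu]) ** 2).sum())
```

The stress is zero exactly when every latent distance matches its data distance.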
Sammon Mapping (1969)
E_Sammon = (1/C) Σ_{i=1}^{N} Σ_{j=i+1}^{N} (L_ij − D_ij)²/D_ij = (1/C) Σ_{i=1}^{N} Σ_{j=i+1}^{N} E_ij²/D_ij
where
E_ij = abs(L_ij − D_ij), the error,
C = Σ_{i=1}^{N} Σ_{j=i+1}^{N} D_ij, a normalisation scalar.
Focuses on small distances: for the same error, the smaller distance is given bigger stress.
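A sketch of this stress (our code; L and D are square matrices of latent and data distances):

```python
import numpy as np

def sammon_stress(L, D):
    """Sammon stress: (1/C) * sum_{i<j} (L_ij - D_ij)^2 / D_ij, C = sum_{i<j} D_ij."""
    iu = np.triu_indices(len(D), k=1)
    l, d = L[iu], D[iu]
    return float(((l - d) ** 2 / d).sum() / d.sum())
```

Note the weighting: the same absolute error on a smaller data distance contributes a bigger stress.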
Possible Extensions
J_2 = Σ_{i,j=1}^{N} ( d_F(x_i, x_j) − d_F(y_i, y_j) )²
Bregman divergences in both data space and latent space.
Or even
J_3 = Σ_{i,j=1}^{N} d_F1( d_F2(x_i, x_j), d_F3(y_i, y_j) )
E_BMMDS = Σ_{i,j=1}^{N} d_F1( d_F2(x_i, x_j), d_F3(y_i, y_j) )
with
L_ij = d_F2(x_i, x_j) and D_ij = d_F3(y_i, y_j), and
d_IS(L_ij, D_ij) = L_ij/D_ij − log(L_ij/D_ij) − 1
Metric MDS with a Bregman divergence between distances:
• Euclidean distance on latents.
• Any divergence on data.
• Itakura-Saito divergence between them, as the divergence to minimise (Sammon-like).
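The Itakura-Saito variant of the stress can be sketched as follows (our code; L and D are square matrices of latent and data distances):

```python
import numpy as np

def is_div(l, d):
    """Elementwise Itakura-Saito divergence d_IS(L, D) = L/D - log(L/D) - 1."""
    r = l / d
    return r - np.log(r) - 1.0

def bmmds_is_stress(L, D):
    """BMMDS stress with the Itakura-Saito divergence, summed over i < j."""
    iu = np.triu_indices(len(D), k=1)
    return float(is_div(L[iu], D[iu]).sum())
```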
Moving the Latent Points
E_BMMDS = Σ_{i,j=1}^{N} d_F1( d_F2(x_i, x_j), d_F3(y_i, y_j) ), with
d_IS(L_ij, D_ij) = L_ij/D_ij − log(L_ij/D_ij) − 1
Gradient descent on the latent points gives
∂E/∂x_i = Σ_{j=1}^{N} (1/D_ij − 1/L_ij)(x_i − x_j)/L_ij
F1 is the I.S. divergence, F2 the Euclidean distance, F3 any divergence.
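One gradient step for this stress might look like the following (a sketch under our conventions: X holds the latent points as rows, D is the data-space distance matrix, and the step size eta is an assumption of ours):

```python
import numpy as np

def update_latents(X, D, eta=0.1):
    """One gradient-descent step for the Itakura-Saito stress d_IS(L_ij, D_ij)."""
    diff = X[:, None, :] - X[None, :, :]
    L = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(L, 1.0)       # avoid division by zero on the diagonal
    Dm = D.copy()
    np.fill_diagonal(Dm, 1.0)
    # dE/dx_i = sum_j (1/D_ij - 1/L_ij) (x_i - x_j) / L_ij
    W = (1.0 / Dm - 1.0 / L) / L
    np.fill_diagonal(W, 0.0)
    grad = (W[:, :, None] * diff).sum(axis=1)
    return X - eta * grad
```

When the latent distances already match the data distances, the gradient vanishes and the latent points stay put.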
The algae data set
Two representations
The standard Bregman representation:
E_BMMDS = Σ_{i=1}^{N} Σ_{j=i+1}^{N} d_F(L_ij, D_ij) = Σ_{i=1}^{N} Σ_{j=i+1}^{N} [ F(L_ij) − F(D_ij) − (L_ij − D_ij) F′(D_ij) ]
Concentrating on the residual errors (Taylor expansion about D_ij):
E_BMMDS = Σ_{i=1}^{N} Σ_{j=i+1}^{N} [ (1/2!) F″(D_ij)(L_ij − D_ij)² + (1/3!) F‴(D_ij)(L_ij − D_ij)³ + ... ]
Basic MDS is a special BMMDS
• Base convex function is chosen as F(x) = x².
• Higher order derivatives (third and above) are zero.
• So only the second-order term of the expansion survives.
• E_BasicMDS = Σ_{i=1}^{N} Σ_{j=i+1}^{N} (L_ij − D_ij)² is derived as a special case of BMMDS.
Sammon Mapping
Select
F(x) = x log x,  dF/dx = log x + 1,  d²F/dx² = 1/x
Then
E_BMDS = Σ_{i=1}^{N} Σ_{j=i+1}^{N} [ (1/2!)(1/D_ij)(L_ij − D_ij)² + (1/3!) F‴(D_ij)(L_ij − D_ij)³ + ... ]
E_BMDS = (1/2) I_Sammon + Σ_{i=1}^{N} Σ_{j=i+1}^{N} [ (1/3!) F‴(D_ij)(L_ij − D_ij)³ + ... ]
where I_Sammon = Σ_{i=1}^{N} Σ_{j=i+1}^{N} (L_ij − D_ij)²/D_ij is the (unnormalised) Sammon stress: the Sammon mapping keeps only the second-order term of the expansion.
Example 2: Extended Sammon
• Base convex function F(x) = x log x, x > 0.
• This is equivalent to
E_ExtendedSammon = Σ_{i=1}^{N} Σ_{j=i+1}^{N} [ L_ij log(L_ij/D_ij) − (L_ij − D_ij) ]
• The Sammon mapping is rewritten as
E_Sammon = (1/C) Σ_{i=1}^{N} Σ_{j=i+1}^{N} (L_ij − D_ij)²/D_ij
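The Extended Sammon stress is then just this divergence summed over pairs (a sketch of ours; L and D are the latent- and data-distance matrices):

```python
import numpy as np

def extended_sammon_stress(L, D):
    """Bregman stress with base function F(x) = x log x:
    sum over i < j of L_ij log(L_ij / D_ij) - (L_ij - D_ij)."""
    iu = np.triu_indices(len(D), k=1)
    l, d = L[iu], D[iu]
    return float((l * np.log(l / d) - (l - d)).sum())
```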
Sammon and Extended Sammon
• The common term is (1/2)(L_ij − D_ij)²/D_ij.
• The Sammon mapping is thus an approximation to the Extended Sammon mapping via the common term.
• The Extended Sammon mapping makes further adjustments on the basis of the higher order terms.
An Experiment on Swiss roll data set
Distance preservation
Relative standard deviation
• On short distances, Sammon has smaller variance than BasicMDS, Extended Sammon has smaller variance than Sammon, i.e. control of small distances is enhanced.
• Large distances are given more and more freedom in the same order as above.
LCMC: local continuity meta-criterion (L. Chen 2006)
• A common measure that assesses the projection quality of different MDS methods.
• In terms of neighbourhood preservation.
• Value between 0 and 1; the higher the better.
Quality assessed by LCMC
Why Extended Sammon outperforms Sammon
• Stress formation
Features of the base convex function
• Recall that the base convex function for the Extended Sammon mapping is F(x) = x log x.
• Higher order derivatives are F⁽ⁿ⁾(x) = (−1)ⁿ (n − 2)!/xⁿ⁻¹ for n ≥ 2.
• Even orders are positive and odd ones are negative.
Stress comparison between Sammon and Extended Sammon
Stress configured by Sammon, calculated and mapped by Extended Sammon
• The Extended Sammon mapping calculates stress on the basis of the configuration found by the Sammon mapping.
• For , the mean stresses calculated by the Extended Sammon mapping are much higher than those given by the Sammon mapping.
• For , the calculated mean stresses are clearly lower than those of the Sammon mapping.
• The Extended Sammon mapping makes shorter mapped distances even shorter, and longer ones even longer.
Stress formation by items
Generalisation: from MDS to Bregman divergences
• A group of MDS methods is generalised as
• C is a normalisation scalar, used for quantitative comparison purposes; it does not affect the mapping results.
• Weight function for missing samples
• The Basic MDS and the Sammon mapping belong to this group.
Generalisation: from MDS to Bregman divergences
• If C=1, then set • Then the generalised MDS is the first term of
BMMDS and BMMDS is an extension of MDS. • Recall that BMMDS is equivalent to
Criterion for base convex function selection
• In order to focus on local distances and concentrate less on long distances, the base convex function must satisfy
• Not all convex functions can be considered, such as F(x)=exp(x).
• The 2nd order derivative is primarily considered. We wish it to be big for small distances and small for long distances. It represents the focusing power on local distances.
Two groups of Convex functions
• The even order derivatives are positive, odd order ones are negative.
• No 1 is that of the Extended Sammon mapping.
Focusing power
Different strategies for focusing power
• The vertical axis is the logarithm of the 2nd order derivative.
• These use different strategies for increasing focusing power.
• In the first group, the second order derivatives become higher and higher for small distances and lower and lower for long distances.
• In the second group, the second order derivatives have a limited maximum value for very small distances, but fall drastically lower for long distances as λ increases.
Two groups of Bregman divergences
• Elastic scaling(Victor E McGee, 1966)
Experiment on Swiss roll: The FirstGroup
Experiment on Swiss roll: FirstGroup
• For the Extended Sammon, Itakura-Saito, and the remaining divergences in the group, local distances are mapped better and better, and long distances are stretched such that the unfolding trend is obvious.
Distance mapping: FirstGroup
Standard deviation: FirstGroup
LCMC measure: FirstGroup
Experiment on Swiss roll: SecondGroup
Distance mapping: SecondGroup
StandardDeviation: SecondGroup
LCMC: SecondGroup
OpenBox, Sammon and FirstGroup
SecondGroup on OpenBox
Distance mapping: two groups
LCMC: two groups
Standard deviation: two groups
Swiss roll distances distribution
OpenBox distances distribution
Swiss roll vs OpenBox
• Distance formation:
– Swiss roll: the proportion of longer distances is greater than that of shorter distances.
– OpenBox: a very large quantity of medium distances; small distances make up much of the rest.
• Mapping results:
– Swiss roll: long distances are stretched and local distances are usually mapped shorter.
– OpenBox: the longest distances are not stretched much, perhaps even compressed. Small distances are mapped longer than their original values in data space by some methods.
• Conclusion: a tug of war between local and long distances, each trying to get the opportunity to be mapped to its original value in data space.
Left and right Bregman divergences
• All of this is with left divergences: the latent points are in the left position in the divergence, ...
• We can show that right divergences produce extensions of curvilinear component analysis.
(Sun et al, ESANN2010)
Conclusion
• Applied Bregman divergences to multidimensional scaling.
• Shown that Basic MDS is a special case and the Sammon mapping approximates a BMMDS.
• Improved upon both with 2 families of divergences.
• Shown results on two artificial data sets.