
Building Compatible Bases on Graphs, Images, and Manifolds


DESCRIPTION

Spectral methods are used in computer graphics, machine learning, and computer vision, where many important problems boil down to constructing a Laplacian operator and finding its eigenvalues and eigenfunctions. We show how to generalize spectral geometry to multiple data spaces. Our construction is based on the idea of simultaneous diagonalization of Laplacian operators. We describe this problem and discuss numerical methods for its solution. We provide several synthetic and real examples of manifold learning, object classification, and clustering, showing that the joint spectral geometry better captures the inherent structure of multimodal data. Talk at SIAM-IS 2014 (http://www.math.hkbu.edu.hk/SIAM-IS14/). A big thanks to Michael Bronstein for providing the great set of slides that this presentation is a mere extension of.


1. Building Compatible Bases on Graphs, Images, and Manifolds. Davide Eynard, Institute of Computational Science, Faculty of Informatics, University of Lugano, Switzerland. SIAM-IS, 14 May 2014. Based on joint works with Artiom Kovnatsky, Michael M. Bronstein, Klaus Glashoff, and Alexander M. Bronstein.

2-5. Ambiguous data: "Cayenne" may refer to a city in Guiana, a pepper, or a Porsche car.

6. Multimodal data analysis. Example tag sets: "Cayenne, Porsche, car, automobile, SUV, ..."; "Chili, pepper, red, hot, food, plant, spice, ..."; "San Francisco, city, USA, California, hill, ..."; "Landrover, SUV, car, Jeep, 4x4, terrain, ..."; "Cayenne, city, Guiana, America, ocean, ..."; "Cayenne, pepper, hot, plant, spice, red, ...".

7-9. Multimodal data analysis: the same items ("Chili, food", "San Francisco, USA", "Landrover, SUV", "Cayenne, city", "Cayenne, Porsche", "Cayenne, pepper") live in two spaces, an image space and a tag space, with different neighborhood structures.

10-13. Discrete manifolds. Graph $(X, E)$ on a discrete set of $n$ vertices $X = \{x_1, \dots, x_n\}$.
- Gaussian edge weight: $w_{ij} = e^{-\|x_i - x_j\|^2 / (2\sigma^2)}$ if $(i, j) \in E$, and $0$ otherwise.
- Unnormalized Laplacian operator: $L = D - W$, with vertex weights $D = \mathrm{diag}\big(\sum_{j \neq i} w_{ij}\big)$.
- Symmetric normalized Laplacian: $L_{\mathrm{sym}} = D^{-1/2} L D^{-1/2}$.

14. Laplacian eigenvalues and eigenfunctions. Eigenvalue problem $L\Phi = \Phi\Lambda$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$ are the eigenvalues, satisfying $0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_n$, and $\Phi = (\phi_1, \dots, \phi_n)$ are the orthonormal eigenfunctions.

15-16. Spectral geometry. Laplacian eigenmap: $m$-dimensional embedding of $X$, $U = \mathrm{argmin}_{U \in \mathbb{R}^{n \times m}} \mathrm{tr}(U^T L U)$ s.t. $U^T U = I$, i.e. using the first eigenvectors of the Laplacian, $U = (\phi_1, \dots, \phi_m)$. (Belkin, Niyogi 2001)

17-20. Heat equation. Heat diffusion on $X$ is governed by the heat equation $L f(t) + \frac{\partial}{\partial t} f(t) = 0$, $f(0) = u$, where $f(t)$ is the amount of heat at time $t$. The heat operator (or heat kernel) $H^t = e^{-tL} = \Phi e^{-t\Lambda} \Phi^T$ provides the solution of the heat equation, $f(t) = H^t f(0)$; the entry $(H^t)_{ij}$ tells how much heat is transferred from point $x_i$ to point $x_j$ in time $t$.

21. Spectral geometry. Diffusion map: $m$-dimensional embedding of $X$ using the heat kernel, $U = (e^{-t\lambda_1}\phi_1, \dots, e^{-t\lambda_m}\phi_m)$. (Bérard et al. 1994; Coifman, Lafon 2006)

22-23. Spectral geometry. Diffusion distance: crosstalk between heat kernels, $d_t^2(x_p, x_q) = \sum_{i=1}^n \big((H^t)_{pi} - (H^t)_{qi}\big)^2 = \sum_{i=1}^n e^{-2t\lambda_i} (\phi_{pi} - \phi_{qi})^2$. (Bérard et al. 1994; Coifman, Lafon 2006)
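Slides 10-23 above translate almost directly into code. The following is a minimal sketch, not the speaker's implementation: it builds a Gaussian-weighted k-nearest-neighbor graph, forms the unnormalized and symmetric normalized Laplacians, and computes a Laplacian-eigenmap embedding and pairwise diffusion distances. The parameter names and defaults (sigma, k_nn, m, t) are illustrative assumptions, not values from the talk.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def build_laplacians(X, sigma=1.0, k_nn=10):
    """Gaussian-weighted k-NN graph; returns the unnormalized L and the symmetric Lsym."""
    D2 = cdist(X, X, 'sqeuclidean')
    W = np.exp(-D2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    # keep only the k_nn nearest neighbours of every vertex, then symmetrize
    far = np.argsort(D2, axis=1)[:, k_nn + 1:]
    for i, cols in enumerate(far):
        W[i, cols] = 0.0
    W = np.maximum(W, W.T)
    d = W.sum(axis=1)
    L = np.diag(d) - W                       # unnormalized Laplacian L = D - W
    Dinv = np.diag(1.0 / np.sqrt(d))
    Lsym = Dinv @ L @ Dinv                   # symmetric normalized Laplacian
    return L, Lsym

def laplacian_eigenmap(L, m=2):
    """m-dimensional embedding from the first nontrivial eigenvectors of L."""
    lam, Phi = eigh(L)                       # eigenvalues in ascending order
    return Phi[:, 1:m + 1]

def diffusion_distances(L, t=1.0, m=10):
    """All pairwise diffusion distances d_t(x_p, x_q) = ||U_p - U_q||_2."""
    lam, Phi = eigh(L)
    U = Phi[:, :m] * np.exp(-t * lam[:m])    # diffusion map coordinates
    return cdist(U, U)
```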
24-25. Spectral geometry. The diffusion distance is the Euclidean distance in the diffusion map space: $d_t(x_p, x_q) = \|U_p - U_q\|_2$. (Bérard et al. 1994; Coifman, Lafon 2006)

26-27. Spectral geometry. Spectral clustering: instead of applying K-means clustering in the original data space, apply it in the Laplacian eigenspace. (Ng et al. 2001)

28-29. Spectral clustering, unimodal. (Ng et al. 2001; Eynard, Bronstein², Glashoff 2012)

30. Multimodal spectral clustering: (figure) unimodal clustering of Modality 1 and Modality 2 taken separately. (Ng et al. 2001; Eynard, Bronstein², Glashoff 2012)

31-36. Diagonalization of the Laplacian. Eigendecomposition can be posed as the minimization problem $\min_{\Phi^T\Phi = I} \mathrm{off}(\Phi^T L \Phi)$ with off-diagonality penalty $\mathrm{off}(X) = \sum_{i \neq j} x_{ij}^2$. Jacobi iteration: compose $\Phi = \dots R_3 R_2 R_1$ as a sequence of Givens rotations, where each new rotation tries to reduce the off-diagonal terms.
- Analytic expression for the optimal rotation for a given pivot.
- Rotation applied in place, no matrix multiplication.
- Guaranteed decrease of the off-diagonal terms.
- Orthonormality guaranteed by construction. (Jacobi 1846)

37-38. Joint approximate diagonalization. Diagonalizing the Laplacians of $X$ and $Y$ independently, $\min_{\Phi^T\Phi = I,\ \Psi^T\Psi = I} \mathrm{off}(\Phi^T L_X \Phi) + \mathrm{off}(\Psi^T L_Y \Psi)$, versus diagonalizing them simultaneously, $\min_{\hat\Phi^T\hat\Phi = I} \mathrm{off}(\hat\Phi^T L_X \hat\Phi) + \mathrm{off}(\hat\Phi^T L_Y \hat\Phi)$. (Cardoso 1995; Eynard, Bronstein², Glashoff 2012)
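Slides 31-36 describe the classical Jacobi scheme. Below is a minimal sketch of one Jacobi sweep for a dense symmetric matrix: each pivot (p, q) gets a Givens rotation with the classical analytic angle, and the rotations are accumulated into Phi so that Phi^T L Phi approaches a diagonal matrix. For clarity the sketch uses full matrix products rather than the in-place row/column update mentioned on the slides.

```python
import numpy as np

def jacobi_sweep(L, Phi=None):
    """One Jacobi sweep over all pivots (p, q) of a symmetric matrix L."""
    A = np.array(L, dtype=float, copy=True)
    n = A.shape[0]
    Phi = np.eye(n) if Phi is None else Phi.copy()
    for p in range(n - 1):
        for q in range(p + 1, n):
            if abs(A[p, q]) < 1e-12:
                continue
            # analytic optimal angle for this pivot: it zeroes the (p, q) entry,
            # so the off-diagonal energy can only decrease
            theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
            c, s = np.cos(theta), np.sin(theta)
            R = np.eye(n)
            R[p, p] = R[q, q] = c
            R[p, q], R[q, p] = s, -s
            A = R.T @ A @ R
            Phi = Phi @ R        # accumulate rotations: A == Phi.T @ L @ Phi
    return A, Phi
```

Repeating sweeps until the off-diagonal energy stops decreasing recovers, up to ordering and signs, the eigendecomposition of L.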
39-41. Joint approximate diagonalization (continued). In most cases, $\hat\Phi$ is only an approximate eigenbasis. Modified Jacobi iteration (JADE): compose $\hat\Phi = \dots R_3 R_2 R_1$ as a sequence of Givens rotations, where each new rotation tries to reduce the off-diagonal terms of both matrices; the overall complexity is akin to the standard Jacobi iteration. (Cardoso 1995; Eynard, Bronstein², Glashoff 2012)

42-43. Multimodal spectral clustering: (figure) Modality 1, Modality 2, and Multimodal (JADE) results. (Ng et al. 2001; Eynard, Bronstein², Glashoff 2012)

44. Disambiguating the NUS dataset. Subset of the NUS-WIDE dataset: annotated images belonging to 7 ambiguous classes. Modality 1: 1000-dimensional distributions of frequent tags. Modality 2: 64-dimensional color histogram image descriptors. Laplacians: Gaussian weights with 20 nearest neighbors. (Eynard, Bronstein², Glashoff 2012; data: Chua et al. 2009)

45. (Figure) Tag clusters, e.g. "underwater, reef, coral, fish" vs. "tiger, wildlife, water": ambiguity, for instance, between "underwater tiger" and "water".

46. (Figure) Color histogram clusters: ambiguity between similarly colored images.

47. (Figure) Multimodal clusters. (Eynard, Bronstein², Glashoff 2012; data: Chua et al. 2009)
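The JADE-style modification on slides 39-41 replaces the single-matrix pivot rotation with one rotation shared by all matrices. The sketch below is a hedged re-derivation rather than the authors' implementation: at each pivot it picks the common angle minimizing the sum of squared pivot entries of all rotated matrices, which is the only part of the joint off-diagonality penalty that the rotation can change. It assumes the Laplacians have equal size, i.e. a bijective correspondence.

```python
import numpy as np

def joint_jacobi_sweep(mats, Phi=None):
    """One sweep of simultaneous Givens rotations over a list of symmetric matrices."""
    mats = [np.array(A, dtype=float, copy=True) for A in mats]
    n = mats[0].shape[0]
    Phi = np.eye(n) if Phi is None else Phi.copy()
    for p in range(n - 1):
        for q in range(p + 1, n):
            # After a rotation by theta, each pivot entry equals u*a_pq + v*(a_pp - a_qq)/2
            # with u = cos(2*theta), v = sin(2*theta).  The best unit vector (u, v) is the
            # eigenvector of H associated with its smallest eigenvalue.
            H = np.zeros((2, 2))
            for A in mats:
                h = np.array([A[p, q], 0.5 * (A[p, p] - A[q, q])])
                H += np.outer(h, h)
            u, v = np.linalg.eigh(H)[1][:, 0]
            theta = 0.5 * np.arctan2(v, u)
            c, s = np.cos(theta), np.sin(theta)
            R = np.eye(n)
            R[p, p] = R[q, q] = c
            R[p, q], R[q, p] = s, -s
            mats = [R.T @ A @ R for A in mats]
            Phi = Phi @ R                    # joint approximate eigenbasis
    return mats, Phi
```

A few sweeps over [L_X, L_Y] yield a joint approximate eigenbasis Phi in which both Laplacians are close to diagonal; multimodal spectral clustering then runs K-means on the first k columns of Phi.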
48-50. Drawbacks of JADE.
- In many applications we do not need the whole basis, just the first $k \ll n$ eigenvectors.
- The explicit assumption of orthonormality of the joint basis restricts the Laplacian discretization to symmetric matrices only.
- It requires a known bijective correspondence between $X$ and $Y$.

51. Bijective correspondence: every item in the image space is matched 1:1 to an item in the tag space.

52-53. Partial correspondence: only some items are matched across the image and tag spaces; others (e.g. "Marijuana, cannabis", "Alligator", "Crocodile", "Bear", "Apple MacBook", "Orange") have no counterpart.

54-57. Partial correspondence. Two discrete manifolds with different numbers of vertices, $X = \{x_1, \dots, x_n\}$ and $Y = \{y_1, \dots, y_m\}$; Laplacians $L_X$ of size $n \times n$ and $L_Y$ of size $m \times m$; a set of corresponding functions $F = (f_1, \dots, f_q)$ and $G = (g_1, \dots, g_q)$. We cannot find a common eigenbasis of the Laplacians $L_X$ and $L_Y$, because they now have different dimensions.

58. Coupled diagonalization. Find two sets of coupled approximate eigenvectors $\hat\Phi$, $\hat\Psi$: $\min_{\hat\Phi, \hat\Psi} \mathrm{off}(\hat\Phi^T L_X \hat\Phi) + \mathrm{off}(\hat\Psi^T L_Y \hat\Psi) + \|F^T \hat\Phi - G^T \hat\Psi\|_F^2$ s.t. $\hat\Phi^T \hat\Phi = I$, $\hat\Psi^T \hat\Psi = I$. (Kovnatsky, Bronstein², Glashoff, Kimmel 2013)

59-61. Perturbation of the joint eigenbasis. Theorem (Cardoso 1994): let $A = \Phi\Lambda\Phi^T$ be a symmetric matrix with simple $\delta$-separated spectrum ($|\lambda_i - \lambda_j| \ge \delta$) and $B = \Phi\Lambda\Phi^T + E$. Then the joint approximate eigenvectors of $A, B$ satisfy $\hat\phi_i = \phi_i + \sum_{j \neq i} \epsilon_{ij}\phi_j + O(\|E\|^2)$, where $|\epsilon_{ij}| = |\phi_i^T E \phi_j| / |2(\lambda_j - \lambda_i)| \le \|E\|_2 / 2\delta$. Consequently, $\mathrm{span}\{\hat\phi_1, \dots, \hat\phi_k\} \approx \mathrm{span}\{\phi_1, \dots, \phi_k\}$, i.e. the first $k$ approximate joint eigenvectors can be expressed as linear combinations of the first $\bar k \ge k$ eigenvectors: $\hat\Phi \approx \bar\Phi R$, $\hat\Psi \approx \bar\Psi S$, where $\bar\Phi = (\phi_1, \dots, \phi_{\bar k})$, $\bar\Lambda_X = \mathrm{diag}(\lambda^X_1, \dots, \lambda^X_{\bar k})$, $\bar\Psi = (\psi_1, \dots, \psi_{\bar k})$, $\bar\Lambda_Y = \mathrm{diag}(\lambda^Y_1, \dots, \lambda^Y_{\bar k})$. (Cardoso 1994; Kovnatsky, Bronstein², Glashoff, Kimmel 2013)
62-65. Subspace coupled diagonalization. Coupling: given a set of corresponding vectors $F, G$, make their Fourier coefficients coincide, $\hat\Phi^T F \approx \hat\Psi^T G$. Based on the perturbation theorem, express the joint approximate eigenbases as linear combinations $\hat\Phi = \bar\Phi R$, $\hat\Psi = \bar\Psi S$; since $\bar\Phi^T L_X \bar\Phi = \bar\Lambda_X$, $\bar\Psi^T L_Y \bar\Psi = \bar\Lambda_Y$ and $\bar\Phi^T \bar\Phi = I$, $\bar\Psi^T \bar\Psi = I$, the problem becomes $\min_{R, S} \mathrm{off}(R^T \bar\Lambda_X R) + \mathrm{off}(S^T \bar\Lambda_Y S) + \|F^T \bar\Phi R - G^T \bar\Psi S\|_F^2$ s.t. $R^T R = I$, $S^T S = I$. (Kovnatsky, Bronstein², Glashoff, Kimmel 2013; Cardoso 1994)

66. Decoupling: given a set of negative correspondences $F_-, G_-$, make their Fourier coefficients as different as possible, leading to $\min_{R, S} \mathrm{off}(R^T \bar\Lambda_X R) + \mathrm{off}(S^T \bar\Lambda_Y S) + \mu_1 \|F_+^T \bar\Phi R - G_+^T \bar\Psi S\|_F^2 - \mu_2 \|F_-^T \bar\Phi R - G_-^T \bar\Psi S\|_F^2$ s.t. $R^T R = I$, $S^T S = I$.

67-70. Properties of the subspace formulation:
- The Laplacians are not used explicitly: their first $\bar k$ eigenfunctions $\bar\Phi, \bar\Psi$ and eigenvalues $\bar\Lambda_X, \bar\Lambda_Y$ are pre-computed, so any Laplacian can be used.
- The problem size is $2\,\bar k \times k$, independent of the number of samples.
- No bijective correspondence is required. (Kovnatsky, Bronstein², Glashoff, Kimmel 2013; Cardoso 1994)
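Below is a minimal sketch of the subspace formulation above, assuming the first k eigenvectors Phi, Psi and eigenvalues lamX, lamY of the two Laplacians are precomputed and corresponding functions F (n x q) and G (m x q) are given. For simplicity it uses square R and S (the number of input eigenvectors equals the number of coupled vectors), only the positive coupling term, and a soft orthogonality penalty in place of the manifold optimization used in the paper; the weights mu_c and mu_o and the optimizer choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def off(X):
    """Off-diagonality penalty: sum of squared off-diagonal entries."""
    return np.sum(X**2) - np.sum(np.diag(X)**2)

def coupled_diag(Phi, lamX, Psi, lamY, F, G, k, mu_c=1.0, mu_o=10.0):
    """Return coupled bases Phi_hat = Phi R and Psi_hat = Psi S (k columns each)."""
    Phi, Psi = Phi[:, :k], Psi[:, :k]
    LamX, LamY = np.diag(lamX[:k]), np.diag(lamY[:k])

    def objective(z):
        R, S = z[:k * k].reshape(k, k), z[k * k:].reshape(k, k)
        coupling = np.linalg.norm(F.T @ Phi @ R - G.T @ Psi @ S)**2
        # soft orthogonality penalty, standing in for optimization on the orthogonal group
        ortho = (np.linalg.norm(R.T @ R - np.eye(k))**2
                 + np.linalg.norm(S.T @ S - np.eye(k))**2)
        return off(R.T @ LamX @ R) + off(S.T @ LamY @ S) + mu_c * coupling + mu_o * ortho

    z0 = np.concatenate([np.eye(k).ravel(), np.eye(k).ravel()])
    res = minimize(objective, z0, method='L-BFGS-B')
    R = res.x[:k * k].reshape(k, k)
    S = res.x[k * k:].reshape(k, k)
    return Phi @ R, Psi @ S
```

The returned Phi_hat = Phi R and Psi_hat = Psi S play the role of the coupled approximate eigenbases; running K-means on their rows gives a multimodal clustering in the spirit of the CD rows of the table that follows.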
71-75. Clustering results. Accuracy (%); CD denotes coupled diagonalization using the given percentage of positive (pos) or positive and negative (pos+neg) correspondences.

Method             Circles   Text   Caltech    NUS   Digits   Reuters
#points                800    800       105    145     2000       600
Uncoupled             53.0    60.4      78.1   80.7     78.9      52.3
Harmonic Mean         95.6    97.2      87.6   89.0     87.0      52.3
Arithmetic Mean       96.5    96.9      87.6   95.2     82.8      52.2
Comraf (*)            40.8    60.8      86.9   81.6     53.2
MVSC                  95.6    97.2      81.0   89.0     83.1      52.3
MultiNMF (*)          41.1    50.5      77.4   87.2     53.1
SC-ML                 98.2    97.6      88.6   94.5     87.8      52.8
JADE                 100      98.4      86.7   93.1     82.5      52.3
CD (pos) 10%          52.5    54.5      78.7   78.6     94.2      53.7
CD (pos) 20%          61.3    60.0      80.8   82.9     94.1      54.2
CD (pos) 60%          93.7    86.5      87.0   87.2     93.9      54.7
CD (pos) 100%         98.9    96.8      89.5   94.5     93.9      54.8
CD (pos+neg) 10%      67.3    63.6      86.5   92.7     94.9      59.0
CD (pos+neg) 20%      69.6    67.8      87.9   93.3     94.8      57.6
CD (pos+neg) 60%      95.2    87.0      89.2   94.5     94.8      57.0

(*) Only five values appear in the source for Comraf and MultiNMF; the column of the missing entry is not recoverable, so their values are listed in the order given.

Methods: Eynard 2012; Bekkerman 2007; Cai 2011; Liu 2013; Dong 2013. Data: Cai 2011; Chua 2009; Alpaydin 1998; Liu 2013; Amini 2009.
76. Object classification. (Figure) ROC curves (TPR vs. FPR, logarithmic FPR axis) comparing Uncoupled 1, Uncoupled 2, JADE, CD (pos), and CD (pos+neg).

77. Manifold alignment. 831 images (120×100) of a human face and 698 images (64×64) of a statue; the datasets were manually coupled using 25 points sampled with FPS; results compared to manifold alignment (MA). (Ham, Lee, Saul 2005)

78. (Figure) MA vs. CD alignment results. (Ham, Lee, Saul 2005; Eynard, Bronstein², Glashoff 2012)

79-84. Summary. A framework for multimodal data analysis, working in the subspace of the eigenvectors of the Laplacians, and with only partial correspondences. We have: some papers (see our Web pages), code and data, and extensions to other applications / fields.

85. Thank you!