Cross-modal Embeddings for Video and Audio Retrieval

Dídac Surís1, Amanda Duarte1,2, Amaia Salvador1, Jordi Torres1,2, and Xavier Giró-i-Nieto1

1 Universitat Politècnica de Catalunya - UPC, Spain
2 Barcelona Supercomputing Center - BSC, Spain

{amanda.duarte, amaia.salvador, xavier.giro}@upc.edu

Abstract. In this work, we explore the multi-modal information provided by the YouTube-8M dataset by projecting the audio and visual features into a common feature space, to obtain joint audio-visual embeddings. These links are used to retrieve audio samples that fit well to a given silent video, and also to retrieve images that match a given query audio. The results in terms of Recall@K obtained over a subset of YouTube-8M videos show the potential of this unsupervised approach for cross-modal feature learning.

Keywords: Sonorization, embedding, retrieval, cross-modal, YouTube-8M

1 Introduction

Videos have become the next frontier in artificial intelligence. Their rich semantics make them a challenging data type, posing difficulties at the perceptual, reasoning, and even computational levels.

In addition to that, the popularization of deep neural networks among the computer vision and audio communities has defined a common framework boosting multi-modal research. Tasks like video sonorization, speaker impersonation or self-supervised feature learning have exploited the opportunities offered by artificial neurons to project images, text and audio in a feature space where bridges across modalities can be built.

This work exploits the relation between the visual and audio contents in a video clip to learn a joint embedding space with deep neural networks. We propose a joint audiovisual space to address a retrieval task in which the query can be formulated from either of the two modalities.

2 Architecture

The main objective of this research is to transform the two different feature representations (image and audio, separately) into a joint space.


Fig. 1. Schematic of the architecture used in this work.

Our model, depicted in Figure 1, consists of two separate sets of fully connected layers of different sizes, one for visual features and a second one for audio features. Both are trained to be mapped into the same cross-modal representation. We adopt a self-supervised approach, as we exploit the unsupervised correspondence between the audio and visual tracks in any video clip. Finally, a classification from the two embeddings is performed through an additional fully connected layer with a sigmoid activation function.

Each hidden layer uses ReLU as its activation function, and all the weights in each layer are regularized by the L2 norm.
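As an illustration of this two-branch design, below is a minimal PyTorch sketch. It is not the authors' implementation: the class and variable names are ours, the layer sizes follow Section 3.1, and the 1024-dimensional visual and 128-dimensional audio inputs are assumed to correspond to the YouTube-8M video-level features. The L2 weight regularization would typically be applied through the optimizer's weight decay.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def branch(dims):
    """Fully connected stack: ReLU on the hidden layers, linear projection at the end."""
    layers = []
    for i, (d_in, d_out) in enumerate(zip(dims[:-1], dims[1:])):
        layers.append(nn.Linear(d_in, d_out))
        if i < len(dims) - 2:          # ReLU only on the hidden layers
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class CrossModalNet(nn.Module):
    """Two branches mapped into a shared embedding, plus a classifier shared by both."""
    def __init__(self, num_classes, embed_dim=250):
        super().__init__()
        self.visual = branch([1024, 2000, 2000, 700, 700, embed_dim])
        self.audio = branch([128, 450, 450, 200, 200, embed_dim])
        self.classifier = nn.Linear(embed_dim, num_classes)  # weights shared between branches

    def forward(self, v_feat, a_feat):
        # L2-normalizing the embeddings is a convenience; cosine similarity is invariant to it.
        v_emb = F.normalize(self.visual(v_feat), dim=-1)
        a_emb = F.normalize(self.audio(a_feat), dim=-1)
        # Sigmoid outputs for the (multi-label) classification regularizer.
        v_probs = torch.sigmoid(self.classifier(v_emb))
        a_probs = torch.sigmoid(self.classifier(a_emb))
        return v_emb, a_emb, v_probs, a_probs
```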

3 Training

The objective of this work is to get the two embeddings of the same video (visual and audio) to be as close as possible (ideally, the same), while keeping embeddings from different videos as far apart as possible. The notion of "similarity" or "closeness" here is mathematically represented by the cosine similarity between the embeddings.

In addition to that, inspired by the work presented in [1], we provide additional information to our system by incorporating the video labels (classes) provided by the YouTube-8M dataset. This information is added as a regularization term that seeks to solve the high-level classification problem from both the audio and the video embeddings, sharing the weights between the two branches. To that end, the loss function used for the classification is the well-known cross-entropy loss. This loss is optimized together with the cosine similarity loss, serving as a regularization term. In other words, the system learns to classify the audio and the images of a video (separately) into the different classes or labels provided by the dataset. We limit its effect by using a regularization parameter λ.
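A possible way to write the combined objective is sketched below. Only the cosine similarity term, the classification regularizer and the weight λ are stated explicitly in the text; the margin-based handling of negative pairs is our assumption, modeled on the retrieval loss of [1], and the function and argument names are ours.

```python
import torch
import torch.nn.functional as F

def joint_loss(v_emb, a_emb, v_probs, a_probs, labels_v, labels_a,
               match, lam=0.02, margin=0.2):
    """Cosine-similarity retrieval loss plus a lambda-weighted classification term.

    match is 1 for (visual, audio) pairs taken from the same clip and 0 for sampled
    negative pairs; each branch is classified against its own video's multi-hot labels.
    """
    cos = F.cosine_similarity(v_emb, a_emb, dim=-1)
    # Pull corresponding pairs together; push mismatched pairs below the margin.
    sim_loss = torch.where(match.bool(),
                           1.0 - cos,
                           torch.clamp(cos - margin, min=0.0)).mean()
    # Classification on both branches (sigmoid outputs, binary cross-entropy,
    # float multi-hot label vectors).
    cls_loss = (F.binary_cross_entropy(v_probs, labels_v)
                + F.binary_cross_entropy(a_probs, labels_a))
    return sim_loss + lam * cls_loss
```

Following Section 3.1, λ would be kept at 0 for the first 10,000 training steps and then raised to 0.02.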

The features used to train our model are already precomputed and provided by the YouTube-8M dataset [2]. In particular, we use the video-level features, which represent the whole video clip with two vectors: one for the audio and another one for the video.
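For completeness, a sketch of how these precomputed video-level features can be read is given below. The TFRecord field names ("video_id", "labels", "mean_rgb", "mean_audio") and the 1024/128 dimensions reflect our understanding of the public YouTube-8M video-level release and should be treated as assumptions; the file pattern is a placeholder.

```python
import tensorflow as tf

# Assumed field names and sizes of the YouTube-8M video-level TFRecords.
FEATURE_SPEC = {
    "video_id": tf.io.FixedLenFeature([], tf.string),
    "labels": tf.io.VarLenFeature(tf.int64),
    "mean_rgb": tf.io.FixedLenFeature([1024], tf.float32),   # visual vector
    "mean_audio": tf.io.FixedLenFeature([128], tf.float32),  # audio vector
}

def parse_example(serialized):
    return tf.io.parse_single_example(serialized, FEATURE_SPEC)

files = tf.io.gfile.glob("path/to/video_level/train*.tfrecord")  # placeholder path
dataset = tf.data.TFRecordDataset(files).map(parse_example)

for example in dataset.take(1):
    visual = example["mean_rgb"]    # shape (1024,)
    audio = example["mean_audio"]   # shape (128,)
```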


3.1 Parameters and Implementation Details

For our experiments we used the following parameters (collected in the configuration sketch after the list):

– Batch size of 1024.
– We saw that starting with λ different from zero led to poor embedding similarity because the classification accuracy was favored. Thus, we began the training with λ = 0 and set it to 0.02 at step 10,000.
– Margin α = 0.2.
– Percentage of negative samples p_negative = 0.6.
– 4 hidden layers in each network branch, with the number of neurons per layer being, from features to embedding, 2000, 2000, 700, 700 in the image branch, and 450, 450, 200, 200 in the audio branch.
– Dimensionality of the feature vector = 250.
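For reference, these settings can be gathered into a single configuration object; a minimal sketch (all names are ours):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrainingConfig:
    # Values taken from Section 3.1.
    batch_size: int = 1024
    lambda_cls: float = 0.02          # weight of the classification regularizer
    lambda_start_step: int = 10_000   # lambda is kept at 0 before this step
    margin: float = 0.2
    p_negative: float = 0.6           # fraction of negative pairs
    image_layers: List[int] = field(default_factory=lambda: [2000, 2000, 700, 700])
    audio_layers: List[int] = field(default_factory=lambda: [450, 450, 200, 200])
    embedding_dim: int = 250
```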

4 Results

All the experiments presented in this section were performed on a subset of 6,000 video clips from the YouTube-8M dataset [2].

4.1 Quantitative Performance Evaluation

To obtain the quantitative results we use the Recall@K metric. We define Recall@K as the recall rate at top K for all the retrieval experiments, that is, the percentage of all the queries for which the corresponding video is retrieved in the top K; hence, higher is better.
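As an illustration, a minimal sketch of this metric computed over the two sets of embeddings is given below; ranking by cosine similarity is our assumption (consistent with the training objective), and the function name is ours.

```python
import numpy as np

def recall_at_k(query_emb, target_emb, k):
    """Fraction of queries whose matching item (same row index) appears among
    the top-k targets ranked by cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    sims = q @ t.T                                  # (num_queries, num_targets)
    topk = np.argsort(-sims, axis=1)[:, :k]         # indices of the k best targets
    hits = (topk == np.arange(len(q))[:, None]).any(axis=1)
    return hits.mean()
```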

The experiments are performed with different dimensions of the feature vector. Table 1 shows the recall results from audio to video, while Table 2 shows the recall from video to audio.

Table 1. Evaluation of Recall from audio to video

k (feature dim.)   Recall@1   Recall@5   Recall@10
256                21.5%      52.0%      63.1%
512                15.2%      39.5%      52.0%
1024                9.8%      30.4%      39.6%

Table 2. Evaluation of Recall from video to audio

k (feature dim.)   Recall@1   Recall@5   Recall@10
256                22.3%      51.7%      64.4%
512                14.7%      38.0%      51.5%
1024               10.2%      29.1%      40.3%

4.2 Qualitative Performance Evaluation

To obtain the qualitative results, a random video was chosen and, from its image embedding, we retrieved the video with the closest audio embedding, and the other way around. In case the closest embedding retrieved corresponded to the same video, we took the second one in the ordered list.
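A minimal sketch of this retrieval step (names are ours; cosine similarity is again assumed as the ranking score):

```python
import numpy as np

def retrieve_closest(query_emb, target_embs, query_index):
    """Index of the nearest target by cosine similarity, skipping the
    target that belongs to the query's own video."""
    q = query_emb / np.linalg.norm(query_emb)
    t = target_embs / np.linalg.norm(target_embs, axis=1, keepdims=True)
    ranking = np.argsort(-(t @ q))      # best match first
    for idx in ranking:
        if idx != query_index:
            return int(idx)
```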

On the left side of Figure 2 we show the results given a video query; on the right, the input query is an audio clip. Examples depicting the real videos and audio are available online at https://goo.gl/NAcJah. For each result and each query, we also show their YouTube-8M labels.


Fig. 2. Qualitative results. On the left we show the results obtained when we gave a video as a query. On the right, the results are based on an audio query.

The results show that, when starting from the image features of a video, the retrieved audio represents a very accurate fit for those images.

5 Conclusions

We presented a simple but effective method to retrieve audio samples that fit correctly to a given (muted) video. The qualitative results show that already existing online videos, due to their variety, represent a very good source of audio for new videos, even when retrieving from only a small subset of this large amount of data. Given the difficulty of creating new audio from scratch, we believe that a retrieval approach is the path to follow in order to give audio to videos. The source code and trained model used in this paper are publicly available at https://github.com/surisdi/youtube-8m.

6 Acknowledgements

This work was partially supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (ERDF) under contract TEC2016-75976-R. Amanda Duarte was funded by the mobility grant of the Severo Ochoa Program at the Barcelona Supercomputing Center (BSC-CNS).

References

1. Salvador, A., Hynes, N., Aytar, Y., Marin, J., Ofli, F., Weber, I., Torralba, A.: Learning cross-modal embeddings for cooking recipes and food images. In: CVPR (2017)

2. Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8M: A large-scale video classification benchmark. CoRR abs/1609.08675 (2016)