
Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks

Konstantinos Bousmalis, Google Brain, San Francisco, CA, [email protected]

Nathan Silberman, Google Research, New York, NY, [email protected]

David Dohan, Google Brain, Mountain View, CA, [email protected]

Dumitru Erhan, Google Brain, San Francisco, CA, [email protected]

Dilip Krishnan, Google Research, Cambridge, MA, [email protected]

Abstract

Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images often fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that attempt to map representations between the two domains or learn to extract features that are domain-invariant. In this work, we present a new approach that learns, in an unsupervised manner, a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.

1. Introduction

Large and well-annotated datasets such as ImageNet [9], COCO [29] and Pascal VOC [12] are considered crucial to advancing computer vision research. However, creating such datasets is prohibitively expensive. One alternative is the use of synthetic data for model training. It has been a long-standing goal in computer vision to use game engines or renderers to produce virtually unlimited quantities of labeled data. Indeed, certain areas of research, such as deep reinforcement learning for robotics tasks, effectively require that models be trained in synthetic domains, as training in real-world environments can be excessively expensive and time-consuming [38, 43]. Consequently, there has been a renewed interest in training models in the synthetic domain and applying them in real-world settings [8, 48, 38, 43, 25, 32, 35, 37]. Unfortunately, models naively trained on synthetic data do not typically generalize to real images.

(a) Image examples from the Linemod dataset.

(b) Examples generated by our model, trained on Linemod.

Figure 1. RGBD samples generated with our model vs. real RGBD samples from the Linemod dataset [22, 46]. In each subfigure the top row is the RGB part of the image, and the bottom row is the corresponding depth channel. Each column corresponds to a specific object in the dataset. See Sect. 4 for more details.

A common solution to this problem is using unsupervised domain adaptation.


In this setting, we would like to transfer knowledge learned from a source domain, for which we have labeled data, to a target domain, for which we have no ground truth labels. Previous work either attempts to find a mapping from representations of the source domain to those of the target [41], or seeks to find representations that are shared between the two domains [14, 44, 31, 5], such that the features are invariant to the domain from which they are extracted. Ideally, such domain-invariant features would prevent a model trained on source data from concentrating on aspects not also found in the target domain. While such approaches have shown good progress, they are still not on par with purely supervised approaches trained only on the target domain.

In this work, we train a model to change images from the source domain to appear as if they were sampled from the target domain while maintaining their original content. We propose a novel Generative Adversarial Network (GAN)-based architecture that is able to learn such a transformation in an unsupervised manner, i.e. without using corresponding pairs from the two domains. Our method offers a number of advantages over existing domain adaptation approaches:

Decoupling from the Task-Specific Architecture: In most domain adaptation approaches, the process of domain adaptation and the task-specific architecture inference are tightly integrated. One cannot switch a task-specific component of the model without having to re-train the entire domain adaptation process. In contrast, because our model maps one image to another at the pixel level, practitioners can alter the task-specific architecture without having to re-train the domain adaptation component.

Generalization Across Label Spaces: Because previous models couple domain adaptation with a specific task, the label spaces in the source and target domain are constrained to match. In contrast, our model is able to handle cases where the target label space at test time differs from the label space at training time.

Training Stability: Domain adaptation approaches that rely on some form of adversarial training [5, 14] are sensitive to random initialization. To address this, we incorporate a task-specific loss trained on both source and generated images and a pixel similarity regularization that allows us to avoid mode collapse [40] and stabilize training. By using these tools, we are able to reduce the variance of performance for the same hyperparameters across different random initializations of our model (see Section 4). Note that incorporating the task-specific loss at training time does not affect our ability to use a different task-specific classifier or a different label space during test time.

Data Augmentation: Conventional domain adaptation approaches are limited to learning from a finite set of source and target data. However, by conditioning on both source images and a stochastic noise vector, our model can be used to create virtually unlimited stochastic samples that appear similar to images from the target domain.

Interpretability: The output of the model, a domain-adapted image, is much more easily interpreted than a domain-adapted feature vector.

To demonstrate the efficacy of our strategy, we focus on the tasks of object classification and pose estimation, where the object of interest is in the foreground of a given image, for both source and target domains. Our method outperforms the state-of-the-art unsupervised domain adaptation techniques on a range of datasets for object classification and pose estimation, while generating images that look very similar to the target domain (see Figure 1).

2. Related Work

Learning to perform unsupervised domain adaptation is an open theoretical and practical problem. While much prior work exists, our literature review focuses primarily on Convolutional Neural Network (CNN)-based methods due to their empirical superiority on this problem [14, 31, 41, 45].

Unsupervised Domain Adaptation: Ganin et al. [13, 14] and Ajakan et al. [3] introduced the Domain-Adversarial Neural Network (DANN): an architecture trained to extract domain-invariant features. Their model's first few layers are shared by two classifiers: the first predicts task-specific class labels when provided with source data, while the second is trained to predict the domain of its inputs. DANNs minimize the domain classification loss with respect to parameters specific to the domain classifier, while maximizing it with respect to the parameters that are common to both classifiers. This minimax optimization becomes possible in a single step via the use of a gradient reversal layer. While DANN's approach to domain adaptation is to make the features extracted from both domains similar, our approach is to adapt the source images to look as if they were drawn from the target domain.

Tzeng et al. [45] and Long et al. [31] proposed versions of DANNs where the maximization of the domain classification loss is replaced by the minimization of the Maximum Mean Discrepancy (MMD) metric [20], computed between features extracted from sets of samples from each domain.

Bousmalis et al. [5] introduce a model that explicitly separates the components that are private to each domain from those that are common to both domains. They make use of a reconstruction loss for each domain, a similarity loss which encourages domain invariance, and a difference loss which encourages the common and private representation components to be complements of each other.

Other related techniques involve learning a mapping from one domain to the other at a feature level.


In such a setup, the feature extraction pipeline is fixed during the domain adaptation optimization. This has been applied in various non-CNN based approaches [17, 6, 19] as well as the more recent CNN-based Correlation Alignment (CORAL) [41] algorithm.

Generative Adversarial Networks: Our model uses Generative Adversarial Networks (GANs) [18] conditioned on source images and a noise vector. Other recent works have also attempted to use GANs conditioned on images. Ledig et al. [28] used an image-conditioned GAN for super-resolution. Yoo et al. [47] introduce the task of generating images of clothes from images of models wearing them, by training on corresponding pairs of the clothes worn by models and on a hanger. In contrast to our work, neither method conditions on both images and noise vectors, and ours is also applied to an entirely different problem space.

The work perhaps most similar to ours is that of Liu and Tuzel [30], who introduce an architecture of a pair of coupled GANs, one for the source and one for the target domain, whose generators share their high-layer weights and whose discriminators share their low-layer weights. In this manner, they are able to generate corresponding pairs of images which can be used for unsupervised domain adaptation. The success of this approach, however, is contingent on the ability to generate high quality samples from noise alone.

Style Transfer: The popular work of Gatys et al. [15, 16] introduced a method of style transfer, in which the style of one image is transferred to another while holding the content fixed. While highly novel, the process is slow as it requires backpropagating back to the pixels. Johnson et al. [24] introduce a model for feed-forward style transfer. They train a network conditioned on an image to produce an output image whose activations on a pre-trained model are similar to both the input image (high-level content activations) and a single target image (low-level style activations). However, both of these approaches are optimized to replicate the style of a single image, as opposed to our work, which seeks to replicate the style of an entire domain of images.

3. Model

We begin by explaining our model in the context of image classification, though our method is not specific to this particular task. Given a labeled dataset in a source domain and an unlabeled dataset in a target domain, our goal is to train a classifier on data from the source domain that generalizes to the target domain. Previous work performs this task using a single network that performs both domain adaptation and image classification, making the domain adaptation process specific to the classifier architecture.

Our model decouples the process of domain adaptation from the process of task-specific classification, as its primary function is to adapt images from the source domain to make them appear as if they were sampled from the target domain. Once adapted, any off-the-shelf classifier, e.g. Inception V3 [42] or ResNets [21], can be trained to perform the task at hand as if no domain adaptation were required.

More formally, let $\mathbf{X}^s = \{\mathbf{x}^s_i, \mathbf{y}^s_i\}_{i=0}^{N^s}$ represent a labeled dataset of $N^s$ samples from the source domain and let $\mathbf{X}^t = \{\mathbf{x}^t_i\}_{i=0}^{N^t}$ represent an unlabeled dataset of $N^t$ samples from the target domain. Our pixel adaptation model consists of a generator function $G(\mathbf{x}^s, \mathbf{z}; \boldsymbol{\theta}_G) \rightarrow \mathbf{x}^f$, parameterized by $\boldsymbol{\theta}_G$, that maps a source domain image $\mathbf{x}^s \in \mathbf{X}^s$ and a noise vector $\mathbf{z} \sim p_z$ to an adapted, or fake, image $\mathbf{x}^f$. Given the generator function $G$, it is possible to create a new dataset $\mathbf{X}^f = \{G(\mathbf{x}^s, \mathbf{z}), \mathbf{y}^s\}$ of any size. Finally, given an adapted dataset $\mathbf{X}^f$, the task-specific classifier can be trained as if the training and test data were sampled from the same distribution.
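As a concrete illustration of this last step, the sketch below builds such an adapted dataset in NumPy; `generator` is a hypothetical stand-in for the trained G, and the array shapes are assumptions rather than details from our released code.

```python
import numpy as np

def make_adapted_dataset(generator, xs, ys, nz=10, rng=np.random):
    """Build the adapted dataset X^f = {G(x^s, z), y^s}.

    generator: hypothetical callable implementing the trained G(x^s, z; theta_G),
               mapping [N, H, W, C] source images and [N, nz] noise to adapted images.
    xs, ys:    source images and their labels; labels are carried over unchanged.
    """
    z = rng.uniform(-1.0, 1.0, size=(xs.shape[0], nz)).astype(np.float32)
    xf = generator(xs, z)   # adapted ("fake") images
    return xf, ys           # same labels, target-styled pixels
```

Because z is resampled on every call, the same source image can yield many different adapted samples, which is the basis of the data augmentation advantage noted above.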

3.1. Learning

To train our model, we employ a generative adversarial objective to encourage $G$ to produce images that are similar to the target domain images. During training, our generator $G(\mathbf{x}^s, \mathbf{z}; \boldsymbol{\theta}_G) \rightarrow \mathbf{x}^f$ maps a source image $\mathbf{x}^s$ and a noise vector $\mathbf{z}$ to an adapted image $\mathbf{x}^f$. Furthermore, the model is augmented by a discriminator function $D(\mathbf{x}; \boldsymbol{\theta}_D)$ that outputs the likelihood $d$ that a given image $\mathbf{x}$ has been sampled from the target domain. The discriminator tries to distinguish between ‘fake’ images $\mathbf{X}^f$ produced by the generator, and ‘real’ images from the target domain $\mathbf{X}^t$. Note that in contrast to the standard GAN formulation [18], in which the generator is conditioned only on a noise vector, our model's generator is conditioned on both a noise vector and an image from the source domain. In addition to the discriminator, the model is also augmented with a classifier $T(\mathbf{x}; \boldsymbol{\theta}_T) \rightarrow \mathbf{y}$ which assigns task-specific labels $\mathbf{y}$ to images $\mathbf{x} \in \{\mathbf{X}^f, \mathbf{X}^t\}$.

Our goal is to optimize the following minimax objective:

$$\min_{\boldsymbol{\theta}_G, \boldsymbol{\theta}_T} \max_{\boldsymbol{\theta}_D} \; \alpha \mathcal{L}_d(D, G) + \beta \mathcal{L}_t(G, T) \quad (1)$$

where $\alpha$ and $\beta$ are weights that control the interaction of the losses. $\mathcal{L}_d$ represents the domain loss:

$$\mathcal{L}_d(D, G) = \mathbb{E}_{\mathbf{x}^t}\big[\log D(\mathbf{x}^t; \boldsymbol{\theta}_D)\big] + \mathbb{E}_{\mathbf{x}^s, \mathbf{z}}\big[\log\big(1 - D(G(\mathbf{x}^s, \mathbf{z}; \boldsymbol{\theta}_G); \boldsymbol{\theta}_D)\big)\big] \quad (2)$$

$\mathcal{L}_t$ is a task-specific loss, and in the case of classification we use a typical softmax cross-entropy loss:

$$\mathcal{L}_t(G, T) = \mathbb{E}_{\mathbf{x}^s, \mathbf{y}^s, \mathbf{z}}\Big[ -\mathbf{y}^{s\top} \log T\big(G(\mathbf{x}^s, \mathbf{z}; \boldsymbol{\theta}_G); \boldsymbol{\theta}_T\big) - \mathbf{y}^{s\top} \log T\big(\mathbf{x}^s; \boldsymbol{\theta}_T\big) \Big] \quad (3)$$

where $\mathbf{y}^s$ is the one-hot encoding of the class label for source input $\mathbf{x}^s$.

Figure 2. An overview of the model architecture. On the left, we depict the overall model architecture following the style in [34]. On the right, we expand the details of the generator and the discriminator components. The generator G generates an image conditioned on a synthetic image x^s and a noise vector z. The discriminator D discriminates between real and fake images. The task-specific classifier T assigns task-specific labels y to an image. A convolution with stride 1 and 64 channels is indicated as n64s1 in the image. lrelu stands for leaky ReLU nonlinearity. BN stands for a batch normalization layer and FC for a fully connected layer. Note that we are not displaying the specifics of T as those are different for each task and decoupled from the domain adaptation process.

Notice that we train $T$ with both adapted and non-adapted source images. When training $T$ only on adapted images, it is possible to achieve similar performance, but doing so may require many runs with different initializations due to the instability of the model. Indeed, without training on source images as well, the model is free to shift class assignments (e.g. class 1 becomes 2, class 2 becomes 3, etc.) while still being successful at optimizing the training objective. We have found that training the classifier $T$ on both source and adapted images avoids this scenario and greatly stabilizes training (see Table 5). Finally, it is important to reiterate that once trained, we are free to adapt other images from the source domain which might use a different label space (see Table 4).

In our implementation, $G$ is a convolutional neural network with residual connections that maintains the resolution of the original image, as illustrated in Figure 2. Our discriminator $D$ is also a convolutional neural network. The minimax optimization of Equation 1 is achieved by alternating between two steps. During the first step, we update the discriminator and task-specific parameters $\boldsymbol{\theta}_D, \boldsymbol{\theta}_T$, while keeping the generator parameters $\boldsymbol{\theta}_G$ fixed. During the second step, we fix $\boldsymbol{\theta}_D, \boldsymbol{\theta}_T$ and update $\boldsymbol{\theta}_G$.
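A minimal TensorFlow 1.x sketch of this alternating scheme follows. It assumes that loss tensors `d_loss` (the discriminator cross-entropy, i.e. the negation of Eq. 2), `t_loss` (Eq. 3), and a generator-side surrogate `g_loss`, as well as variable scopes named 'discriminator', 'task', and 'generator', are defined elsewhere; the loss weights and learning rate are placeholders, not the exact values used in our experiments.

```python
import tensorflow as tf

alpha, beta = 1.0, 1.0  # loss weights (placeholders)

d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='discriminator')
t_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='task')
g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='generator')

# Step 1: update theta_D and theta_T while theta_G stays fixed.
d_step = tf.train.AdamOptimizer(2e-4, beta1=0.5).minimize(
    alpha * d_loss + beta * t_loss, var_list=d_vars + t_vars)

# Step 2: update theta_G while theta_D and theta_T stay fixed. Maximizing the
# domain loss with respect to G is expressed through g_loss, e.g. the usual
# non-saturating -log D(G(x^s, z)) surrogate.
g_step = tf.train.AdamOptimizer(2e-4, beta1=0.5).minimize(
    alpha * g_loss + beta * t_loss, var_list=g_vars)

# Each training iteration then runs d_step followed by g_step on a fresh batch.
```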

3.2. Content–similarity loss

In certain cases, we have prior knowledge regarding the low-level image adaptation process. For example, we may expect the hues of the source and adapted images to be the same. In our case, we render single objects on black backgrounds and consequently we expect images adapted from these renderings to have similar foregrounds and different backgrounds from the equivalent source images. Renderers typically provide access to z-buffer masks that allow us to differentiate between foreground and background pixels. This prior knowledge can be formalized via the use of an additional loss that penalizes large differences between source and generated images for foreground pixels only. Such a similarity loss grounds the generation process to the original image and helps stabilize the minimax optimization, as shown in Sect. 4.4 and Table 5. Our optimization objective then becomes:

$$\min_{\boldsymbol{\theta}_G, \boldsymbol{\theta}_T} \max_{\boldsymbol{\theta}_D} \; \alpha \mathcal{L}_d(D, G) + \beta \mathcal{L}_t(T, G) + \gamma \mathcal{L}_c(G) \quad (4)$$

where $\alpha$, $\beta$, and $\gamma$ are weights that control the interaction of the losses, and $\mathcal{L}_c$ is the content-similarity loss.

We use a masked pairwise mean squared error, which is a variation of the pairwise mean squared error (PMSE) [11]. This loss penalizes differences between pairs of pixels rather than absolute differences between inputs and outputs. Our masked version calculates the PMSE between the generated foreground and the source foreground. Formally, given a binary mask $\mathbf{m} \in \mathbb{R}^k$, our masked-PMSE loss is:

$$\mathcal{L}_c(G) = \mathbb{E}_{\mathbf{x}^s, \mathbf{z}}\Big[ \tfrac{1}{k} \big\|(\mathbf{x}^s - G(\mathbf{x}^s, \mathbf{z}; \boldsymbol{\theta}_G)) \circ \mathbf{m}\big\|_2^2 - \tfrac{1}{k^2} \big((\mathbf{x}^s - G(\mathbf{x}^s, \mathbf{z}; \boldsymbol{\theta}_G))^\top \mathbf{m}\big)^2 \Big] \quad (5)$$

Figure 3. Visualization of our model's ability to generate samples when trained to adapt MNIST to MNIST-M. (a) Source images x^s from MNIST; (b) The samples adapted with our model G(x^s, z) with random noise z; (c) The nearest neighbors in the MNIST-M training set of the generated samples in the middle row. Differences between the middle and bottom rows suggest that the model is not memorizing the target dataset.

where $k$ is the number of pixels in input $\mathbf{x}$, $\|\cdot\|_2^2$ is the squared $L_2$-norm, and $\circ$ is the Hadamard product. This loss allows the model to learn to reproduce the overall shape of the objects being modeled without wasting modeling power on the absolute color or intensity of the inputs, while allowing our adversarial training to change the object in a consistent way. Note that the loss does not hinder the foreground from changing, but rather encourages the foreground to change in a consistent way. In this work, we apply a masked PMSE loss for a single foreground object because of the nature of our data, but one can trivially extend this to multiple foreground objects.
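A minimal TensorFlow sketch of the masked-PMSE loss of Eq. 5 is shown below, assuming NHWC tensors and a binary foreground mask; the helper name and shapes are illustrative, not the released implementation.

```python
import tensorflow as tf

def masked_pmse(source, generated, mask):
    """Masked pairwise mean squared error (Eq. 5).

    source, generated: [batch, height, width, channels] tensors in [-1, 1].
    mask:              binary foreground mask, broadcastable to the same shape.
    """
    diff = (source - generated) * mask                                # (x^s - G(x^s, z)) o m
    k = tf.cast(tf.reduce_prod(tf.shape(source)[1:]), tf.float32)     # pixels per image
    term1 = tf.reduce_sum(tf.square(diff), axis=[1, 2, 3]) / k        # (1/k) ||.||_2^2
    term2 = tf.square(tf.reduce_sum(diff, axis=[1, 2, 3])) / (k * k)  # (1/k^2) ((.)^T m)^2
    return tf.reduce_mean(term1 - term2)                              # expectation over the batch
```

The second term removes the mean foreground difference, so the loss constrains the shape of the foreground while leaving its overall intensity free, as described above.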

4. Evaluation

We evaluate our method on object classification datasets used in previous work¹, including MNIST, MNIST-M [14], and USPS [10], as well as a variation of the LINEMOD dataset [22, 46], a standard for object instance recognition and 3D pose estimation, for which we have synthetic and real data. Our evaluation is composed of qualitative and quantitative components, using a number of unsupervised domain adaptation scenarios. The qualitative evaluation involves examining the ability of our method to learn the underlying pixel adaptation process from the source to the target domain by visually inspecting the generated images. The quantitative evaluation involves a comparison of the performance of our model to previous work and to “Source Only” and “Target Only” baselines that do not use any domain adaptation. In the first case, we train models only on the unaltered source training data and evaluate on the target test data. In the “Target Only” case, we train task models on the target domain training set only and evaluate on the target domain test set. The unsupervised domain adaptation scenarios we consider are listed below.

¹The most commonly used dataset for visual domain adaptation in the context of object classification is Office [39]. However, we do not use it in this work as there are significant high-level variations due to label pollution. For more information, see the relevant explanation in [5].

MNIST to USPS: Images of the 10 digits (0-9) from the MNIST [27] dataset are used as the source domain and images of the same 10 digits from the USPS [10] dataset represent the target domain. To ensure a fair comparison between the “Source-Only” and domain adaptation experiments, we train our models on a subset of 50,000 images from the original 60,000 MNIST training images. The remaining 10,000 images are used for cross-validation of the “Source-Only” experiment. The standard splits for USPS, comprising 6,562 training, 729 validation, and 2,007 test images, are used.

MNIST to MNIST-M: MNIST [27] digits represent the source domain and MNIST-M [14] digits represent the target domain. MNIST-M is a variation on MNIST proposed for unsupervised domain adaptation. Its images were created by using each MNIST digit as a binary mask and inverting with it the colors of a background image. The background images are random crops uniformly sampled from the Berkeley Segmentation Data Set (BSDS500) [4]. All our experiments follow the experimental protocol of [14]. We use the labels for 1,000 out of the 59,001 MNIST-M training examples to find optimal hyperparameters for our models.
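For concreteness, a small NumPy sketch of this mask-and-invert construction is shown below; it follows the description above rather than the exact script used to build MNIST-M.

```python
import numpy as np

def make_mnist_m_example(digit, background_patch):
    """Blend one MNIST digit with one BSDS500 crop as described above.

    digit:            [28, 28] grayscale MNIST image with values in [0, 255].
    background_patch: [28, 28, 3] RGB crop from BSDS500.
    """
    mask = (digit > 127).astype(np.uint8)[..., None]   # binarize the digit into a [28, 28, 1] mask
    # Invert the background colors wherever the digit strokes are present.
    return mask * (255 - background_patch) + (1 - mask) * background_patch
```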

Synthetic Cropped LineMod to Cropped LineMod: The LineMod dataset [22] is a dataset of small objects in cluttered indoor settings imaged in a variety of poses. We use a cropped version of the dataset [46], where each image is cropped with one of 11 objects in the center. The 11 objects used are ‘ape’, ‘benchviseblue’, ‘can’, ‘cat’, ‘driller’, ‘duck’, ‘holepuncher’, ‘iron’, ‘lamp’, ‘phone’, and ‘cam’. A second component of the dataset consists of CAD models of these same 11 objects in a large variety of poses rendered on a black background, which we refer to as Synthetic Cropped LineMod. We treat Synthetic Cropped LineMod as the source dataset and the real Cropped LineMod as the target dataset. We train our model on 109,208 rendered source images and 9,673 real-world target images for domain adaptation, 1,000 for validation, and a target domain test set of 2,655 for testing. Using this scenario, our task involves both classification and pose estimation. Consequently, our task-specific network $T(\mathbf{x}; \boldsymbol{\theta}_T) \rightarrow \{\mathbf{y}, \mathbf{q}\}$ outputs both a class $\mathbf{y}$ and a 3D pose estimate in the form of a positive unit quaternion vector $\mathbf{q}$. The task loss becomes:

$$\mathcal{L}_t(G, T) = \mathbb{E}_{\mathbf{x}^s, \mathbf{y}^s, \mathbf{z}}\Big[ -\mathbf{y}^{s\top} \log \hat{\mathbf{y}}^s - \mathbf{y}^{s\top} \log \hat{\mathbf{y}}^f + \xi \log\big(1 - |\mathbf{q}^{s\top} \hat{\mathbf{q}}^s|\big) + \xi \log\big(1 - |\mathbf{q}^{s\top} \hat{\mathbf{q}}^f|\big) \Big] \quad (6)$$

where the first and second terms are the classification loss, similar to Equation 3, and the third and fourth terms are the log of a 3D rotation metric for quaternions [23]. $\xi$ is the weight for the pose loss, $\mathbf{q}^s$ represents the ground truth 3D pose of a sample, $\{\hat{\mathbf{y}}^s, \hat{\mathbf{q}}^s\} = T(\mathbf{x}^s; \boldsymbol{\theta}_T)$, and $\{\hat{\mathbf{y}}^f, \hat{\mathbf{q}}^f\} = T(G(\mathbf{x}^s, \mathbf{z}; \boldsymbol{\theta}_G); \boldsymbol{\theta}_T)$. Table 2 reports the mean angle the object would need to be rotated (on a fixed 3D axis) to move from the predicted to the ground truth pose [22].

Figure 4. Visualization of our model's ability to generate samples when trained to adapt Synth Cropped Linemod to Cropped Linemod. Top row: Source RGB and depth image pairs from Synth Cropped LineMod x^s; Middle row: The samples adapted with our model G(x^s, z) with random noise z; Bottom row: The nearest neighbors between the generated samples in the middle row and images from the target training set. Differences between the generated and target images suggest that the model is not memorizing the target dataset.
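A sketch of the combined classification-and-pose loss of Eq. 6 in TensorFlow is given below; the function signature, the assumption that the network outputs class probability vectors and unit quaternions, and the epsilon for numerical stability are illustrative choices, not the exact implementation.

```python
import tensorflow as tf

def pose_task_loss(y_true, q_true, y_hat_s, q_hat_s, y_hat_f, q_hat_f, xi=0.2, eps=1e-7):
    """Classification + pose loss (Eq. 6) averaged over a batch.

    y_true: one-hot class labels; q_true: ground-truth unit quaternions.
    (y_hat_s, q_hat_s) = T(x^s), (y_hat_f, q_hat_f) = T(G(x^s, z)).
    """
    def xent(y_hat):
        return -tf.reduce_sum(y_true * tf.log(y_hat + eps), axis=1)
    def pose_term(q_hat):
        dot = tf.abs(tf.reduce_sum(q_true * q_hat, axis=1))   # |q^s . q_hat|, in [0, 1]
        return tf.log(1.0 - dot + eps)                        # log of the 3D rotation metric
    per_example = (xent(y_hat_s) + xent(y_hat_f)
                   + xi * (pose_term(q_hat_s) + pose_term(q_hat_f)))
    return tf.reduce_mean(per_example)
```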

4.1. Implementation Details

All the models are implemented using TensorFlow² [1] and are trained with the Adam optimizer [26]. We optimize the objective in Equation 1 for the “MNIST to USPS” and “MNIST to MNIST-M” scenarios and the one in Equation 4 for the “Synthetic Cropped Linemod to Cropped Linemod” scenario. We use batches of 32 samples from each domain and the input images are zero-centered and rescaled to $[-1, 1]$. In our implementation, we let $G$ take the form of a convolutional residual neural network that maintains the resolution of the original image, as shown in Figure 2. $\mathbf{z}$ is a vector of $N_z$ elements, each sampled from a uniform distribution $z_i \sim U(-1, 1)$. It is fed to a fully connected layer which transforms it to a channel of the same resolution as that of the image channels, and it is subsequently concatenated to the input as an extra channel. In all our experiments we use a $\mathbf{z}$ with $N_z = 10$. The discriminator $D$ is a convolutional neural network where the number of layers depends on the image resolution: the first layer is a stride 1x1 convolution (motivated by [33]), which is followed by repeatedly stacking stride 2x2 convolutions until we reduce the resolution to less than or equal to 4x4. The number of filters is 64 in all layers of $G$, and is 64 in the first layer of $D$ and repeatedly doubled in subsequent layers. The output of this pyramid is fed to a fully connected layer with a single activation for the domain classification loss.³ For all our experiments, the CNN topologies used for the task classifier $T$ are identical to the ones used in [14, 5] to be comparable to previous work in unsupervised domain adaptation.

²We plan to open source our code once author feedback is released.
³All details of our architecture are included in the supplementary material.
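The noise-injection scheme described above (a fully connected projection of z concatenated to the input as an extra channel) might look roughly as follows in TensorFlow 1.x; layer names and sizes are illustrative, not the exact released architecture.

```python
import tensorflow as tf

def concat_noise_channel(images, nz=10):
    """Sample z ~ U(-1, 1), project it to one channel at the input resolution,
    and concatenate it to the image batch as an extra channel."""
    batch = tf.shape(images)[0]
    height, width = images.get_shape().as_list()[1:3]          # static spatial dimensions
    z = tf.random_uniform([batch, nz], minval=-1.0, maxval=1.0)
    z_channel = tf.layers.dense(z, height * width)              # fully connected projection
    z_channel = tf.reshape(z_channel, [batch, height, width, 1])
    return tf.concat([images, z_channel], axis=3)               # generator input: image + noise channel
```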

4.2. Quantitative Results

We have not found a universally applicable way to optimize hyperparameters for unsupervised domain adaptation. Consequently, we follow the experimental protocol of [5] and use a small set (∼1,000) of labeled target domain data as a validation set for the hyperparameters of all the methods we compare. We perform all experiments using the same protocol to ensure fair and meaningful comparison. The performance on this validation set can serve as an upper bound of a satisfactory validation metric for unsupervised domain adaptation. To our knowledge, validating parameters in an unsupervised manner is still an open research question, and is out of the scope of this work. However, as we discuss in Section 4.5, we evaluate our model in a semi-supervised setting where we have 1,000 labeled examples in the target domain, to confirm that our model is still able to improve upon the naive approach of training on this small set of target labeled examples.

We evaluate our model using the aforementioned combinations of source and target datasets, and compare the performance of our model's task architecture $T$ to that of other state-of-the-art unsupervised domain adaptation techniques based on the same task architecture $T$. As mentioned above, in order to evaluate the efficacy of our model, we first compare with the accuracy of models trained in a “Source Only” setting for each domain adaptation scenario. This setting represents a lower bound on performance. Next we compare models in a “Target Only” setting for each scenario. This setting represents a weak upper bound on performance, as it is conceivable that a good unsupervised domain adaptation model might improve on these results, as we do in this work for “MNIST to MNIST-M”.

Quantitative results of these comparisons are presented in Tables 1 and 2. Our method not only achieves better results than previous work on the “MNIST to MNIST-M” scenario, it also outperforms the “Target Only” performance we are able to get with the same task classifier $T$. Furthermore, we are also able to achieve state-of-the-art results for the “MNIST to USPS” scenario. Finally, our model reduces the mean angle error for the “Synth Cropped Linemod to Cropped Linemod” scenario by more than half compared to the previous state-of-the-art.

Table 1. Mean classification accuracy (%) for the digit datasets. The “Source-only” and “Target-only” rows are the results on the target domain when using no domain adaptation and training only on the source or the target domain respectively. We note that our Source and Target only baselines resulted in different numbers than previously published works, which we also indicate in parentheses.

Model          MNIST to USPS   MNIST to MNIST-M
Source Only    78.9            63.6 (56.6)
CORAL [41]     81.7            57.7
MMD [45, 31]   81.1            76.9
DANN [14]      85.1            77.4
DSN [5]        91.3            83.2
CoGAN [30]     91.2            62.0
Our model      95.9            98.2
Target-only    96.5            96.4 (95.9)

4.3. Qualitative Results

The qualitative results of our model are illustrated in Figures 1, 3, and 4. In Figures 3 and 4 one can see the visualization of the generation process, as well as the nearest neighbors of our generated samples in the target domain. In both scenarios, it is clear that our method is able to learn the underlying transformation process that is required to adapt the original source images to images that look like they could belong in the target domain. As a reminder, the MNIST-M digits have been generated by using MNIST digits as a binary mask to invert the colors of a background image. It is clear from Figure 3 that in the “MNIST to MNIST-M” case, our model is able to not only generate backgrounds from different noise vectors z, but it is also able to learn this inversion process. This is clearly evident from e.g. digits 3 and 6 in the figure. In the “Synthetic Cropped Linemod to Cropped Linemod” case, our model is able to sample, in the RGB channels, realistic backgrounds and adjust the photometric properties of the foreground object. In the depth channel it is able to learn a plausible noise model.

4.4. Model Analysis

We present a number of additional experiments that demonstrate how the model works and explore its potential limitations.

Sensitivity to Used Backgrounds: In both the “MNIST to MNIST-M” and “Synthetic Cropped LineMod to Cropped LineMod” scenarios, the source domains are images of digits or objects on black backgrounds. Our quantitative evaluation (Tables 1 and 2) illustrates the ability of our model to adapt the source images to the target domain style, but raises two questions: Is it important that the backgrounds of the source images are black, and how successful are data-augmentation strategies that use a randomly chosen background image instead? To that effect, we ran additional experiments where we substituted various backgrounds in place of the default black background for the Synthetic Cropped Linemod dataset. The backgrounds are randomly selected crops of images from the ImageNet dataset. In these experiments we used only the RGB portion of the images, for both source and target domains, since we do not have equivalent “backgrounds” for the depth channel. As demonstrated in Table 3, substituting various backgrounds for black makes little to no difference in the ultimate results of the model. This is consistent with the observations of prior work, in which the use of random backgrounds only worked when coupled with a high degree of manual tuning to ensure that the lighting of the backgrounds and objects was consistent and the object boundary discontinuities did not appear overly synthetic.

Table 2. Mean classification accuracy and pose error for the “Synth Cropped Linemod to Cropped Linemod” scenario.

Model          Classification Accuracy   Mean Angle Error
Source-only    47.33%                    89.2°
MMD [45, 31]   72.35%                    70.62°
DANN [14]      99.90%                    56.58°
DSN [5]        100.00%                   53.27°
Our model      99.98%                    23.5°
Target-only    100.00%                   6.47°

Table 3. Mean classification accuracy and pose error when varying the background of images from the source domain. For these experiments we used only the RGB portions of the images, as there is no trivial or typical way with which we could have added backgrounds to depth images. For comparison, we display results with black backgrounds and ImageNet backgrounds (INet), with the “Source Only” setting and with our model for the RGB-only case.

Model (RGB-only)     Classification Accuracy   Mean Angle Error
Source-Only, Black   47.33%                    89.2°
Source-Only, INet    91.15%                    50.18°
Our Model, Black     94.16%                    55.74°
Our Model, INet      96.95%                    36.79°

Generalization of the Model: Two additional aspects of the model are relevant to understanding its performance. Firstly, is the model actually learning a successful pixel-level data adaptation process, or is it simply memorizing the target images and replacing the source images with images from the target training set? Secondly, is the model able to generalize about the two domains in a fashion not limited to the classes of objects seen during training?


Table 4. Performance of our model trained on only 6 out of 11 Linemod objects. The first row, ‘Unseen Classes,’ displays the performance on all the samples of the remaining 5 Linemod objects not seen during training. The second row, ‘Full test set,’ displays the performance on the target domain test set for all 11 objects.

Test Set         Classification Accuracy   Mean Angle Error
Unseen Classes   98.98%                    31.69°
Full test set    99.28%                    32.37°

To answer the first question, we first run our generator $G$ on source images to create an adapted dataset. Next, for each transferred image, we perform a pixel-space $L_2$ nearest neighbor lookup in the target training images to determine whether the model is simply memorizing images from the target dataset or not. Illustrations are shown in Figures 3 and 4, where the top rows are samples from $\mathbf{x}^s$, the middle rows are generated samples $G(\mathbf{x}^s, \mathbf{z})$, and the bottom rows are the nearest neighbors of the generated samples in the target training set. It is clear from the figures that the model is not memorizing images from the target training set.
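This memorization check is a plain pixel-space L2 nearest-neighbor search; a minimal NumPy sketch with hypothetical array names is shown below.

```python
import numpy as np

def nearest_neighbor_indices(generated, target_train):
    """For each flattened generated image ([n, d]), return the index of its
    closest target-domain training image ([m, d]) under the L2 distance."""
    # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, computed without an explicit loop.
    d2 = (np.sum(generated ** 2, axis=1, keepdims=True)
          - 2.0 * generated @ target_train.T
          + np.sum(target_train ** 2, axis=1))
    return np.argmin(d2, axis=1)
```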

Next, we evaluate our model's ability to generalize to classes unseen during training. To do so, we retrain our best model using a subset of images from the source and target domains which includes only half of the object classes for the “Synthetic Cropped Linemod” to “Cropped Linemod” scenario. Specifically, the objects ‘ape’, ‘benchviseblue’, ‘can’, ‘cat’, ‘driller’, and ‘duck’ are observed during the training procedure, and the objects ‘holepuncher’, ‘iron’, ‘lamp’, ‘phone’, and ‘cam’ are only used during testing. Once $G$ is trained, we fix its weights and pass the full training set of the source domain to generate images used for training the task classifier $T$. We then evaluate the performance of $T$ on the entire set of unobserved objects (6,060 samples), and on the test set of the target domain for all objects for direct comparison with Table 2.

Stability Study: We also evaluate the importance of the different components of our model. We demonstrate that while the task and content losses do not improve the overall performance of the model, they dramatically stabilize training. Training instability is a common characteristic of adversarial training, necessitating various strategies to deal with model divergence and mode collapse [40]. We measure the standard deviation of the performance of our models by running each model 10 times with different random parameter initializations but with the same hyperparameters. Table 5 illustrates that the use of the task and content-similarity losses reduces the level of variability across runs and provides more reproducible performance.

Table 5. The effect of using the task and content losses L_t, L_c on the standard deviation (std) of the performance of our model on the “Synth Cropped Linemod to Linemod” scenario. L_t^source means we use source data to train T; L_t^adapted means we use generated data to train T; L_c means we use our content-similarity loss. A lower std on the performance metrics means that the results are more easily reproducible.

L_t^source   L_t^adapted   L_c   Classification Accuracy std   Mean Angle Error std
-            -             -     23.26                         16.33
-            X             -     22.32                         17.48
X            X             -     2.04                          3.24
X            X             X     1.60                          6.97

Table 6. Semi-supervised experiments for the “Synthetic Cropped Linemod to Cropped Linemod” scenario. When a small set of 1,000 target data is available to our model, it is able to improve upon baselines trained on either just these 1,000 samples or the synthetic training set augmented with these labeled target samples.

Method        Classification Accuracy   Mean Angle Error
1000-only     99.51%                    25.26°
Synth+1000    99.89%                    23.50°
Our model     99.93%                    13.31°

4.5. Semi-supervised Experiments

Finally, we evaluate the usefulness of our model in a semi-supervised setting, in which we assume we have a small number of labeled target training examples. The semi-supervised version of our model simply uses these additional training samples as extra input to the classifier $T$ during training. We sample 1,000 examples from the Cropped Linemod dataset not used in any previous experiment and use them as additional training data. We evaluate the semi-supervised version of our model on the test set of the Cropped Linemod target domain against the following two baselines: (a) training a classifier only on these 1,000 target samples without any domain adaptation, a setting we refer to as ‘1,000-only’; and (b) training a classifier on these 1,000 target samples and the entire Synthetic Cropped Linemod training set with no domain adaptation, a setting we refer to as ‘Synth+1000’. As one can see from Table 6, our model is able to greatly improve upon the naive setting of incorporating a few target domain samples during training. We also note that with these labeled samples our model achieves an even better performance than in the completely unsupervised setting for the “Synthetic Cropped Linemod to Cropped Linemod” scenario (Table 2).

5. Conclusion

We present a state-of-the-art method for performing unsupervised domain adaptation. Our models outperform previous work on a set of unsupervised domain adaptation scenarios, and in the case of the challenging “Synthetic Cropped Linemod to Cropped Linemod” scenario, our model more than halves the error for pose estimation compared to the previous best result. They are able to do so by using a GAN-based technique, stabilized by both a task-specific loss and a novel content-similarity loss. Furthermore, our model decouples the process of domain adaptation from the task-specific architecture, and provides the added benefit of being easy to understand via the visualization of the adapted image outputs of the model.

References

[1] M. Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. Preprint arXiv:1603.04467, 2016.
[2] D. B. F. Agakov. The IM algorithm: a variational approach to information maximization. In Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference, volume 16, page 201. MIT Press, 2004.
[3] H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, and M. Marchand. Domain-adversarial neural networks. Preprint, http://arxiv.org/abs/1412.4446, 2014.
[4] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. TPAMI, 33(5):898–916, 2011.
[5] K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan. Domain separation networks. In Proc. Neural Information Processing Systems (NIPS), 2016.
[6] R. Caseiro, J. F. Henriques, P. Martins, and J. Batista. Beyond the shortest path: Unsupervised domain adaptation by sampling subspaces along the spline flow. In CVPR, 2015.
[7] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.
[8] P. Christiano, Z. Shah, I. Mordatch, J. Schneider, T. Blackwell, J. Tobin, P. Abbeel, and W. Zaremba. Transfer from simulation to real world through learning deep inverse dynamics model. arXiv preprint arXiv:1610.03518, 2016.
[9] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[10] J. S. Denker, W. Gardner, H. P. Graf, D. Henderson, R. Howard, W. E. Hubbard, L. D. Jackel, H. S. Baird, and I. Guyon. Neural network recognizer for hand-written zip code digits. In NIPS, pages 323–331, 1988.
[11] D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction from a single image using a multi-scale deep network. In NIPS, pages 2366–2374, 2014.
[12] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The Pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, 2015.
[13] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495, 2014.
[14] Y. Ganin et al. Domain-adversarial training of neural networks. JMLR, 17(59):1–35, 2016.
[15] L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
[16] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414–2423, 2016.
[17] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, pages 2066–2073. IEEE, 2012.
[18] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[19] R. Gopalan, R. Li, and R. Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011.
[20] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Scholkopf, and A. Smola. A kernel two-sample test. JMLR, pages 723–773, 2012.
[21] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[22] S. Hinterstoisser et al. Model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes. In ACCV, 2012.
[23] D. Q. Huynh. Metrics for 3D rotations: Comparison and analysis. Journal of Mathematical Imaging and Vision, 35(2):155–164, 2009.
[24] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.
[25] M. Johnson-Roberson, C. Barto, R. Mehta, S. N. Sridhar, and R. Vasudevan. Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks? arXiv preprint arXiv:1610.01983, 2016.
[26] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[27] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[28] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
[29] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
[30] M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. arXiv preprint arXiv:1606.07536, 2016.
[31] M. Long and J. Wang. Learning transferable features with deep adaptation networks. In ICML, 2015.
[32] A. Mahendran, H. Bilen, J. Henriques, and A. Vedaldi. ResearchDoom and CocoDoom: Learning computer vision with games. arXiv preprint arXiv:1610.02431, 2016.
[33] A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. http://distill.pub/2016/deconv-checkerboard/, 2016.
[34] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv e-prints, Oct. 2016.
[35] W. Qiu and A. Yuille. UnrealCV: Connecting computer vision to Unreal Engine. arXiv preprint arXiv:1609.01326, 2016.
[36] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
[37] S. R. Richter, V. Vineet, S. Roth, and V. Koltun. Playing for data: Ground truth from computer games. In European Conference on Computer Vision, pages 102–118. Springer, 2016.
[38] A. A. Rusu, M. Vecerik, T. Rothorl, N. Heess, R. Pascanu, and R. Hadsell. Sim-to-real robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286, 2016.
[39] K. Saenko et al. Adapting visual category models to new domains. In ECCV. Springer, 2010.
[40] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.
[41] B. Sun, J. Feng, and K. Saenko. Return of frustratingly easy domain adaptation. In AAAI, 2016.
[42] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
[43] E. Tzeng, C. Devin, J. Hoffman, C. Finn, X. Peng, S. Levine, K. Saenko, and T. Darrell. Towards adapting deep visuomotor representations from simulated to real environments. arXiv preprint arXiv:1511.07111, 2015.
[44] E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko. Simultaneous deep transfer across domains and tasks. In CVPR, pages 4068–4076, 2015.
[45] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance. Preprint arXiv:1412.3474, 2014.
[46] P. Wohlhart and V. Lepetit. Learning descriptors for object recognition and 3D pose estimation. In CVPR, pages 3109–3118, 2015.
[47] D. Yoo, N. Kim, S. Park, A. S. Paek, and I. S. Kweon. Pixel-level domain transfer. arXiv preprint arXiv:1603.07442, 2016.
[48] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. arXiv preprint arXiv:1609.05143, 2016.


Supplementary Material

A. Additional Generated Images

Figure 1. Linear interpolation between two random noise vectors demonstrates that the model is able to separate out style from content in the MNIST-M dataset. Each row is generated from the same MNIST digit, and each column is generated with the same noise vector.


B. Model Architectures and Parameters

We present the exact model architectures used for each experiment along with the hyperparameters needed to reproduce results. The general form of the generator, G, and the discriminator, D, is depicted in Figure 2 of the paper. For G, we vary the number of filters and the number of residual blocks. For D, we vary the amount of regularization and the number of layers.

Optimization consists of alternating optimization of the discriminator and task classifier parameters, referred to as the D step, with optimization of the generator parameters, referred to as the G step.

Unless otherwise specified, the following hyperparameters apply to all experiments:

• Batch size 32

• Learning rate decayed by 0.95 every 20,000 steps

• All convolutions have a 3x3 filter kernel

• Inject noise drawn from a zero centered Gaussian with stddev 0.2 after every layer of the discriminator

• Dropout is applied to every layer in the discriminator with a keep probability of 90%

• Input noise vector is 10-dimensional, sampled from a uniform distribution U(−1, 1)

• We follow conventions from the DCGAN paper [36] for several aspects:

– An L2 weight decay of 1e−5 is applied to all parameters
– Leaky ReLUs have a leakiness parameter of 0.2
– Parameters are initialized from a zero centered Gaussian with stddev 0.02
– We use the ADAM optimizer with β1 = 0.5
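A minimal sketch of how these shared settings translate into a TensorFlow 1.x optimizer setup is given below; `loss` and `variables` are assumed to be defined by the specific experiment, and the base learning rate shown is only a placeholder for the per-experiment values listed later.

```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
    2e-4,                 # base rate; varies per experiment (see below)
    global_step,
    decay_steps=20000,    # decay by 0.95 every 20,000 steps
    decay_rate=0.95,
    staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=0.5)
train_op = optimizer.minimize(loss, global_step=global_step, var_list=variables)
```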

B.1 USPS Experiments

The Generator and Discriminator are identical to the MNIST-M experiments.

Loss weights:

• Base learning rate is 2e−4

• The discriminator loss weight is 1.0

• The generator loss weight is 1.0

• The task classifier loss weight in G step is 1.0

• There is no similarity loss between the synthetic and generated images

B.2 MNIST-M Experiments (Paper Table 1)

Generator: The generator has 6 residual blocks with 64 filters each.

Discriminator: The discriminator has 4 convolutions with 64, 128, 256, and 512 filters respectively. It has the same overall structure as paper Figure 2.

Loss weights:

• Base learning rate is 1e−3

• The discriminator loss weight is 0.13

• The generator loss weight is 0.011

• The task classifier loss weight in G step is 0.01

• There is no similarity loss between the synthetic and generated images

B.3 LineMod Experiments

All experiments are run on a cluster of 10 TensorFlow workers.

Generator: The generator has 4 residual blocks with 64 filters each.

Discriminator: The discriminator matches the depiction in paper Figure 2. The dropout keep probability is set to 35%.


Parameters without masked loss (Paper Table 2):

• Base learning rate is 2.2e−4, decayed by 0.75 every 95,000 steps

• The discriminator loss weight is 0.004

• The generator loss weight is 0.011

• The task classification loss weight is 1.0

• The task pose loss weight is 0.2

• The task classifier loss weight in G step is 0

• The task classifier is not trained on synthetic images

• There is no similarity loss between the synthetic and generated images

Parameters with masked loss (Paper Table 5):

• Base learning rate is 2.6e−4, decayed by 0.75 every 95,000 steps.

• The discriminator loss weight is 0.0088

• The generator loss weight is 0.011

• The task classification loss weight is 1.0

• The task pose loss weight is 0.29

• The task classifier loss weight in G step is 0

• The task classifier is not trained on synthetic images

• The MPSE loss weight is 22.9

C. InfoGAN Connection

In the case that T is a classifier, we can show that optimizing the task loss in the way described in the main text amounts to a variational approach to maximizing mutual information [2], akin to the InfoGAN model [7], between the predicted class and both the generated and the equivalent source images. The classification loss could be re-written, using conventions from [7], as:

$$\mathcal{L}_t = -\mathbb{E}_{\mathbf{x}^s \sim \mathcal{D}^s}\big[\mathbb{E}_{y' \sim p(y|\mathbf{x}^s)} \log q(y'|\mathbf{x}^s)\big] - \mathbb{E}_{\mathbf{x}^f \sim G(\mathbf{x}^s, \mathbf{z})}\big[\mathbb{E}_{y' \sim p(y|\mathbf{x}^f)} \log q(y'|\mathbf{x}^f)\big] \quad (7)$$

$$\geq -I(y', \mathbf{x}^s) - I(y', \mathbf{x}^f) + 2H(y), \quad (8)$$

where $I$ represents mutual information, $H$ represents entropy, $H(y)$ is assumed to be constant as in [7], $y'$ is the random variable representing the class, and $q(y'|\cdot)$ is an approximation of the posterior distribution $p(y'|\cdot)$ and is expressed in our model with the classifier $T$. Again, notice that we maximize the mutual information of $y'$ and the equivalent source and generated samples. By doing so, we are effectively regularizing the adaptation process to produce images that look similar for each class to the classifier $T$. This helps maintain the original content of the source image and avoids, for example, transforming all objects belonging to one class to look like objects belonging to another.


Figure 2. Additional generation examples for the LineMod dataset. The left 4 columns are generated images and depth channels, while the corresponding right 4 columns are L2 nearest neighbors.


Figure 3. Task classifier (T) architectures for each dataset. We use the same task classifiers as [5, 14] to enable fair comparisons. The MNIST-M classifier is used for USPS, MNIST, and MNIST-M classification. During training, the task classifier is applied to both the synthetic and generated images when L_t^source is enabled (see Paper Table 5). The ‘Private’ parameters are only trained on one of these sets of images, while the ‘Shared’ parameters are shared between the two. At test time on the target domain images, the classifier is composed of the Shared parameters and the Private parameters which are part of the generated images classifier. These first private layers allow the classifier to share high level layers even when the synthetic and generated images have different channels (such as 1-channel MNIST and RGB MNIST-M).

MNIST-M Task Classifier: Private: conv 5x5x32, ReLU. Shared: max-pool 2x2 (2x2 stride); conv 5x5x48, ReLU; max-pool 2x2 (2x2 stride); FC 100 units, ReLU; FC 100 units, ReLU; FC 10 units, sigmoid.

LineMod Task Classifier: Private: conv 5x5x32, ReLU. Shared: max-pool 2x2 (2x2 stride); conv 5x5x64, ReLU; max-pool 2x2 (2x2 stride); FC 128 units, ReLU, dropout 50%; followed by two output heads: FC 1 unit, tanh (angle) and FC 11 units, sigmoid (class).
