

Doctoral Qualifying Examination

Ramesh Kumar Sah

Washington State University

March 25, 2020


Table of Contents

1 Adversarial Machine Learning

2 Adversarial Examples Are Not Bugs, They Are Features

3 Adversarial Learning and Wearable Computing

4 References


Adversarial Machine Learning


Adversarial Examples

Definition

“Adversarial examples are inputs formed by applying small but intentional perturbation to the inputs from the dataset such that the perturbed inputs are almost indistinguishable from the true inputs and results in the model outputting an incorrect answer with high confidence” [1, p. 1].


Mathematically, for a model M trained on samples (x, y) from distribution D, an adversarial example is defined as

x̄ = x + δ    (1)

such that, for a metric m and a constant ε,

M(x̄) ≠ M(x)
m(x̄ − x) ≤ ε    (2)

Here, δ is the adversarial perturbation computed using an adversarial attack method. One of the norms l0, l2, or l∞ is used as the metric m.
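Below is a minimal sketch, not taken from the slides, of how such a perturbation can be computed with the fast gradient sign method of [1], assuming a PyTorch classifier model and inputs scaled to [0, 1]; here the l∞ norm plays the role of the metric m and ε bounds the perturbation as in Eq. (2).

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Compute x_adv = x + delta with ||delta||_inf <= eps, as in Eqs. (1)-(2)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss of the clean input
    loss.backward()
    delta = eps * x.grad.sign()           # one-step l_inf-bounded perturbation
    x_adv = (x + delta).clamp(0.0, 1.0)   # keep the input in its valid range
    return x_adv.detach()

# Usage sketch: x_adv = fgsm_example(model, x, y)
# The attack succeeds when model(x_adv).argmax(1) != model(x).argmax(1).
```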


Some Adversarial Examples

Figure: Image Adversarial Example. Source [1].

Figure: Time-Series Adversarial Example. Source [2].

Figure: Audio Adversarial Example. Source [3].


Adversarial Attacks

Adversarial attacks can be divided into three categories.

1 Poisoning Attack

2 Evasion Attack

3 Exploratory Attack


Poisoning Attack

The adversary wants to make the model M learn false connections between inputs x and outputs y by corrupting the training data.

In this mode, the adversary influences the learning of the model M to make it do its bidding, without needing to alter inputs at test time.


Evasion Attack

In an evasion attack the adversary modifies the inputs x to fool the model M.

Evasion attacks are further sub-divided into two types (both are sketched in code below):

1 Untargeted Attack: In an untargeted attack the adversary wants to cause random misclassification. That is, for an adversarial example x̄, we have M(x̄) ≠ M(x).

2 Targeted Attack: In a targeted attack the adversary chooses a target class ytar into which it wants the model M to classify the adversarial example x̄, i.e., M(x̄) = ytar.
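A minimal sketch of the difference, under the same PyTorch assumptions as the earlier snippet (the helper below is illustrative and not from the slides): an untargeted attack ascends the loss of the true label, while a targeted attack descends the loss of the chosen target label ytar.

```python
import torch
import torch.nn.functional as F

def evasion_step(model, x, y_true, eps=0.03, y_target=None):
    """One FGSM-style step: untargeted if y_target is None, otherwise targeted."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    if y_target is None:
        F.cross_entropy(logits, y_true).backward()
        delta = eps * x.grad.sign()     # move away from the true class
    else:
        F.cross_entropy(logits, y_target).backward()
        delta = -eps * x.grad.sign()    # move toward the target class
    return (x + delta).clamp(0.0, 1.0).detach()
```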


Exploratory Attack

In this setting, the adversary wants to gain as much knowledge as possible about the target system.

Exploratory attacks are carried out when the adversary does not have access to the target system.


Adversary Knowledge

The difficulty of attacking a target system depends on the type of attack the adversary wants to mount and on the knowledge of the target system the adversary has.

Depending on the adversary's knowledge of the target system, it will operate in one of three possible settings.

1 White-box Setting
2 Gray-box Setting
3 Black-box Setting


Figure: The x-axis represents adversarial goals in terms of different attack methods, and the y-axis shows the knowledge of the adversary. Source [4].


Adversarial Transferability

Definition

Adversarial examples computed for one model are often also effective against other, independently trained models. This is called adversarial transferability.


Transferability of adversarial examples makes real-world black-box attacks possible.

An adversary can compute adversarial perturbations for each input separately (instance-specific) or for all inputs at once (instance-agnostic).

Instance-agnostic perturbations are called universal adversarial perturbations, as illustrated below.
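A toy sketch of the distinction, under the same PyTorch assumptions as the earlier snippets; it only illustrates the idea of a single shared perturbation and is not the algorithm of any specific universal-perturbation method.

```python
import torch
import torch.nn.functional as F

def universal_delta(model, loader, eps=0.03, lr=0.01, epochs=1):
    """Learn one shared perturbation delta reused for every input (instance-agnostic)."""
    x0, _ = next(iter(loader))
    delta = torch.zeros_like(x0[:1], requires_grad=True)   # one delta, broadcast over batches
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = -F.cross_entropy(model(x + delta), y)   # maximize classification loss
            loss.backward()
            opt.step()
            delta.data.clamp_(-eps, eps)                   # keep ||delta||_inf <= eps
    return delta.detach()
```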


Adversarial Examples Are Not Bugs, They Are Features


Introduction

In this work [5], the authors present a novel theory for the existence of adversarial examples and support their arguments with an extensive set of experiments.

They separate the robust and non-robust features present in the data, train different models on each, and validate their claims.

Their main result is that “adversarial examples are possible due to the existence of non-robust features in the dataset”.


Contributions

The authors have made the following contributions in [5]:

1 Presented a new theory that explains the reasons for the existence of adversarial examples. They base their theory on the data rather than on the model and the learning algorithm.

2 Linked adversarial transferability to their theory, attributing it to the presence of non-robust features that are learned by multiple models.

3 Provided explanations for some existing results and refuted theories that attributed adversarial examples to the high dimensionality of the input space and to finite-sample overfitting.


A Simple Experiment

Consider an image of a dog taken from the CIFAR-10 dataset D.

Figure: An image of a dog taken from the CIFAR-10 dataset. Source [5]


We compute targeted adversarial examples for all (x, y) ∈ D to create a new dataset D̂ (the construction is sketched in code below). For example:

Figure: An adversarial image of a dog, classified and relabeled as cat. Source [5]
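A minimal sketch of this construction, assuming a targeted_attack routine such as a multi-step variant of the targeted step shown earlier; the attack, ε, and target-selection strategy follow [5] only in spirit.

```python
import torch

def build_relabeled_dataset(model, dataset, num_classes, targeted_attack):
    """Create D-hat: each input is perturbed toward a target class t and relabeled as t."""
    new_xs, new_ys = [], []
    for x, y in dataset:
        t = torch.randint(num_classes, (1,))             # choose a target class (here at random)
        x_adv = targeted_attack(model, x.unsqueeze(0), t)
        new_xs.append(x_adv.squeeze(0))
        new_ys.append(t.item())                          # label with the target, not the true class
    return torch.stack(new_xs), torch.tensor(new_ys)
```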


The dataset D̂ looks completely mislabeled to humans, because humans rely solely on robust features for classification.

A new classifier M̂ trained on D̂ showed a moderate accuracy of 45% on the original test set from D [5].


This means that

1 training inputs in D̂ are associated with their new (target) label solely through imperceptible adversarial perturbations, and are associated with their original label from D through all visible features.

2 adversarial perturbations of a standard model (M trained on D) are patterns predictive of the target class in a well-generalizing sense.

3 there exists a variety of features of the input that are predictive of the label, and only some of these are perceptible to humans.


Robust and Non-Robust Features

Predictive features of the data can be split into robust and non-robust features (formalized below).

Robust features: patterns that are predictive of the true label even when adversarially perturbed.

Non-robust features: patterns that are predictive, but can be flipped by an adversary to indicate the wrong class.
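For reference, [5] formalizes these notions for binary labels y ∈ {−1, +1}, treating a feature as a function f: X → R; the notation below paraphrases the paper, with Δ(x) denoting the set of allowed perturbations.

```latex
\text{$\rho$-useful: } \; \mathbb{E}_{(x,y)\sim D}\big[\, y \cdot f(x) \,\big] \ge \rho,
\qquad
\text{$\gamma$-robustly useful: } \; \mathbb{E}_{(x,y)\sim D}\Big[\, \inf_{\delta \in \Delta(x)} y \cdot f(x+\delta) \,\Big] \ge \gamma .
```

A useful non-robust feature is then ρ-useful for some ρ > 0 but not γ-robustly useful for any γ ≥ 0.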


Figure: The representation of robust and non-robust features. Source [5]


In the original dataset D, robust and non-robust features are associated with and predictive of the true class, in this case “dog”.

Figure: An image of a dog taken from the CIFAR-10 dataset. Source [5]


In the adversarial dataset D̂, the robust features are wrongly associated with the adversarial target class “cat”, while the non-robust features that were modified by the adversary are predictive of the class “cat” in general.

Figure: An adversarial image of a dog, classified and relabeled as cat. Source [5]


And deep neural networks rely on non-robust features even in the presence of robust features.

Figure: The complete picture of robust and non-robust features. Source [5]


Datasets

Next, generate a robust dataset D̂R and a non-robust dataset D̂NR from D, and train different machine learning models using the standard loss and the adversarial loss (a sketch of the robust construction follows the figure).

Figure: Generating robust and non-robust datasets. Source [5]
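A rough sketch of the robust-dataset construction from [5], assuming a robustly trained model whose penultimate-layer representation is exposed as a function g; the actual procedure's optimizer, initialization, and hyperparameters differ.

```python
import torch

def robustified_input(g, x, x_init, steps=100, lr=0.1):
    """Optimize an input so that its representation under the robust model matches g(x)."""
    target = g(x).detach()                      # features of the original image under the robust model
    x_r = x_init.clone().requires_grad_(True)   # start from an unrelated image (or noise)
    opt = torch.optim.SGD([x_r], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((g(x_r) - target) ** 2).sum()   # match representations in feature space
        loss.backward()
        opt.step()
        x_r.data.clamp_(0.0, 1.0)
    return x_r.detach()                         # pair x_r with the original label y to build D-hat_R
```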


Results

Figure: Examples from different datasets and performance of models trained on these datasets. Source [5]


Transferability

The tendency of different architectures to learn similar non-robust features correlates well with the transferability of adversarial examples between them (a measurement sketch follows the figure).

Figure: Transferability of adversarial examples and accuracy on original test set. Source [5]
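A small sketch of how such transferability is typically measured; this is a generic recipe, not the exact evaluation protocol of [5]: craft adversarial examples against a source model and report how often they also fool an independently trained target model.

```python
import torch

@torch.no_grad()
def fooled(model, x_adv, y):
    """Boolean mask of examples the model misclassifies."""
    return model(x_adv).argmax(dim=1) != y

def transfer_rate(source_model, target_model, loader, attack):
    """Fraction of source-model adversarial examples that also fool the target model."""
    transferred, total = 0, 0
    for x, y in loader:
        x_adv = attack(source_model, x, y)       # e.g., the FGSM step sketched earlier
        mask = fooled(source_model, x_adv, y)    # keep only examples that fool the source model
        transferred += fooled(target_model, x_adv[mask], y[mask]).sum().item()
        total += mask.sum().item()
    return transferred / max(total, 1)
```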


Conclusion

1 Adversarial examples are a direct consequence of non-robust features present in the dataset.

2 Adversarial transferability occurs because different models learn similar non-robust features from the dataset.

3 To get a robust classifier, we will need to enforce a prior on the learning method such that learning relies only on robust features. This will also help with the interpretability of deep neural networks.


Discussion I

There are some weaknesses of [5] that are worth discussing.

The authors of [6] showed that adversarial examples can be computed without exploiting non-robust features. Hence the theory put forward in [5] is not completely sound.

The authors provide sound reasons connecting adversarial transferability to the existence of non-robust features. But there are cases where the data distributions do not necessarily share non-robust features yet still exhibit adversarial transferability [7].


Discussion II

Finally, the authors have not accounted for common cases in the design and training of deep neural networks. For example, useful non-robust features can arise from combinations of useful robust features and useless non-robust features. By limiting their discussion to individual features in the feature set, the authors have left some important issues unaddressed.


Adversarial Learning and Wearable Computing


Problems and Future Work

In the following slides I discuss some open problems that lie at the intersection of wearable computing and adversarial learning. I believe these problems are important to the discussion and are also related to the suggested work [5].


Development of an Adversarial Attack Framework for Wearable Systems

Wearable systems are closed systems in the sense that the input-output pipeline is not exposed to outside operators. The available adversarial attack methods will not work in this setting.

To be successful in a real-life wearable setting, an adversarial attack will need to take into account the intrinsic properties of these systems.


Time-Series and Human Comprehensibility

The central premise of [5] depends on the human comprehensibility of robust features.

But humans cannot readily interpret time-series signals, so the idea of robust and non-robust features needs better clarification for time-series systems.

For example, consider the figures shown below:

Figure: Image Adversarial Example. Source [1].

Figure: Time-Series Adversarial Example. Source [2].


Adversariality in Wearable Systems

We need a theory that explains the existence of adversarial examples and of transferability in time-series systems.

Time-series systems do not share the human comprehensibility present in image classification systems. Therefore, the idea of robust and non-robust features presented in [5] will need modifications to conform to the characteristics of time-series systems.


Defense Methods

We need to develop defense methods that take into consideration the results presented in [5], specifically the idea that adversarial examples are linked to properties of the dataset rather than of the models.

This is especially applicable to time-series systems because of the temporal and spatial relationships in the data.


Thank You!

Questions ?


Non-robust Features

Given a learned model θ = (µ, Σ), θ defines an inner product over the input space that represents how a change in the input affects the features learned by the classifier.

The authors use this notion of a metric to demonstrate how the misalignment between the metric learned by the model θ and the l2 metric used by the adversary gives rise to non-robust features (written out below).

Therefore, a small change under the adversary's metric can cause large changes under the data-dependent notion of distance established by θ.
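Concretely, in the Gaussian setting analyzed in [5], the learned parameters θ = (µ, Σ) induce, up to the paper's exact notation, a Mahalanobis inner product and norm on the input space:

```latex
\langle u, v \rangle_{\theta} = u^{\top} \Sigma^{-1} v,
\qquad
\| u \|_{\theta} = \sqrt{\, u^{\top} \Sigma^{-1} u \,}.
```

A perturbation that is small in the adversary's l2 norm can therefore be large under the θ-induced norm when it aligns with low-variance directions of Σ, which is the misalignment referred to above.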


Adversarial Robustness

With adversarial training, the metric induced by the learned features mixes the adversary's l2 metric (used to compute the adversarial training examples) with the metric induced by the features of the original dataset, giving a robust classifier.

This means that training with an l2-bounded adversary prevents the classifier from relying heavily on features which induce a metric dissimilar to the l2 metric (a generic training-loop sketch follows the figure).


Figure: Source [5]
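For concreteness, an l2-bounded adversarial training loop looks roughly like the following. This is a standard PGD-style recipe assuming image-shaped batches, not the exact setup of [5].

```python
import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps=0.5, alpha=0.1, steps=7):
    """l2-bounded PGD attack used to generate training-time adversarial examples."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta + alpha * grad / g_norm                 # normalized gradient ascent step
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = (delta * (eps / d_norm).clamp(max=1.0)).detach().requires_grad_(True)  # project to the l2 ball
    return (x + delta).detach()

def adversarial_training_step(model, optimizer, x, y):
    """One robust-training step: fit the model on adversarially perturbed inputs."""
    model.train()
    x_adv = pgd_l2(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```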


Generating Robust Dataset

Figure: Source [5]


Generating Non-Robust Dataset

Figure: Source [5]


References


References I

[1] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” CoRR, vol. abs/1412.6572, 2014.

[2] H. Ismail Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller, “Adversarial attacks on deep neural networks for time series classification,” 2019 International Joint Conference on Neural Networks (IJCNN), Jul 2019.

[3] K. Rajaratnam and J. K. Kalita, “Noise flooding for detecting audio adversarial examples against automatic speech recognition,” 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 197–201, 2018.

[4] I. Goodfellow, P. McDaniel, and N. Papernot, “Making machine learning robust against adversarial inputs,” Commun. ACM, vol. 61, pp. 56–66, June 2018.

[5] A. Ilyas, S. Santurkar, D. Tsipras, L. Engstrom, B. Tran, and A. Madry, “Adversarial examples are not bugs, they are features,” in Advances in Neural Information Processing Systems 32, pp. 125–136, Curran Associates, Inc., 2019.

[6] P. Nakkiran, “A discussion of ‘Adversarial examples are not bugs, they are features’: Adversarial examples are just bugs, too,” Distill, 2019. https://distill.pub/2019/advex-bugs-discussion/response-5.

[7] M. Naseer, S. H. Khan, H. Khan, F. S. Khan, and F. Porikli, “Cross-domain transferability of adversarial perturbations,” 2019.